Title: [250806] trunk
Revision: 250806
Author: sbar...@apple.com
Date: 2019-10-07 16:34:01 -0700 (Mon, 07 Oct 2019)

Log Message

Allow OSR exit to the LLInt
https://bugs.webkit.org/show_bug.cgi?id=197993

Reviewed by Tadeu Zagallo.

JSTests:

* stress/exit-from-getter-by-val.js: Added.
* stress/exit-from-setter-by-val.js: Added.

Source/JavaScriptCore:

This patch makes it so we can OSR exit to the LLInt.
Here are the interesting implementation details:

1. We no longer baseline compile everything in the inline stack.

2. When the top frame is an LLInt frame, we exit to the corresponding
LLInt bytecode. However, we need to materialize the LLInt registers
for PC, PB, and metadata (a condensed sketch of this follows the list).

3. When dealing with inline call frames where the caller is LLInt, we
need to return to the appropriate place. Consider exiting at a call
A->B (A calls B), where A is LLInt. If A is a normal call, we set the
return PC in the frame we materialize for B to point right after the
LLInt's inline cache for calls. If A is a varargs call, we point it at
the return location for varargs calls. The interesting scenario is
where A is a getter/setter: A might be get_by_id, get_by_val,
put_by_id, or put_by_val. Since the LLInt does not have any form of IC
for getters/setters, we make this work by creating new LLInt
"return location" stubs for these opcodes.

4. We need to update which callee saves we store in the callee if the caller frame
is an LLInt frame. Consider an inline stack A->B->C, where A is an LLInt frame.
When we materialize the stack frame for B, we need to ensure that the LLInt callee
saves that A uses are stored into B's preserved callee saves. Specifically, these
are just the PB/metadata registers.
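
For orientation, here is a condensed sketch of the new exit path, stitched together
from the DFGOSRExit.cpp (executeOSRExit) and DFGOSRExitCompilerCommon.cpp hunks in
the diff below. It is not standalone code; exit, codeBlockForExit, and context are
the state already in scope at that point in the exit ramp:

    // Choose the exit target: LLInt bytecode or baseline JIT code.
    bool exitToLLInt = Options::forceOSRExitToLLInt()
        || codeBlockForExit->jitType() == JITType::InterpreterThunk;

    void* jumpTarget;
    if (exitToLLInt) {
        // Jump to the LLInt machine code for the exit origin's (possibly wide) opcode...
        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
        jumpTarget = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction).executableAddress();

        // ...and materialize the interpreter's registers: metadata table, PB, and PC (point 2).
        context.gpr(LLInt::Registers::metadataTableGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->metadataTable());
    #if USE(JSVALUE64)
        context.gpr(LLInt::Registers::pbGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->instructionsRawPointer());
        context.gpr(LLInt::Registers::pcGPR) = static_cast<uintptr_t>(bytecodeOffset);
    #else
        context.gpr(LLInt::Registers::pcGPR) = bitwise_cast<uintptr_t>(&currentInstruction);
    #endif
    } else {
        // Baseline JIT target: look the bytecode index up in the JIT code map, as before.
        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
        jumpTarget = codeLocation.executableAddress();
    }

The caller side of point 3 is the new callerReturnPC() helper further down in the
diff, which maps GetterCall/SetterCall back to the new op_get_by_id / op_get_by_val /
op_put_by_id / op_put_by_val return-location labels. The Options::forceOSRExitToLLInt()
switch (declared in OptionsList.h) lets this behavior be forced for testing,
presumably via the usual --forceOSRExitToLLInt=true jsc option.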

This patch also fixes offlineasm's macro expansion to allow us to
use computed label names for global labels.

In a future bug, I'm going to investigate some kind of control system for
throwing away baseline code when we tier up:
https://bugs.webkit.org/show_bug.cgi?id=202503

* JavaScriptCore.xcodeproj/project.pbxproj:
* Sources.txt:
* bytecode/CodeBlock.h:
(JSC::CodeBlock::metadataTable):
(JSC::CodeBlock::instructionsRawPointer):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::OSRExit::compileOSRExit):
* dfg/DFGOSRExit.h:
(JSC::DFG::OSRExitState::OSRExitState):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::callerReturnPC):
(JSC::DFG::calleeSaveSlot):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
* dfg/DFGOSRExitCompilerCommon.h:
* dfg/DFGOSRExitPreparation.cpp:
(JSC::DFG::prepareCodeOriginForOSRExit): Deleted.
* dfg/DFGOSRExitPreparation.h:
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileFTLOSRExit):
* llint/LLIntData.h:
(JSC::LLInt::getCodePtr):
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* offlineasm/asm.rb:
* offlineasm/transform.rb:
* runtime/OptionsList.h:

Tools:

* Scripts/run-jsc-stress-tests:

Diff

Modified: trunk/JSTests/ChangeLog (250805 => 250806)


--- trunk/JSTests/ChangeLog	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/JSTests/ChangeLog	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1,3 +1,13 @@
+2019-10-07  Saam Barati  <sbar...@apple.com>
+
+        Allow OSR exit to the LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=197993
+
+        Reviewed by Tadeu Zagallo.
+
+        * stress/exit-from-getter-by-val.js: Added.
+        * stress/exit-from-setter-by-val.js: Added.
+
 2019-10-07  Matt Lewis  <jlew...@apple.com>
 
         Unreviewed, rolling out r250750.

Added: trunk/JSTests/stress/exit-from-getter-by-val.js (0 => 250806)


--- trunk/JSTests/stress/exit-from-getter-by-val.js	                        (rev 0)
+++ trunk/JSTests/stress/exit-from-getter-by-val.js	2019-10-07 23:34:01 UTC (rev 250806)
@@ -0,0 +1,25 @@
+function field() { return "f"; }
+noInline(field);
+
+(function() {
+    var o = {_f:42};
+    o.__defineGetter__("f", function() { return this._f * 100; });
+    var result = 0;
+    var n = 50000;
+    function foo(o) {
+        return o[field()] + 11;
+    }
+    noInline(foo);
+    for (var i = 0; i < n; ++i) {
+        result += foo(o);
+    }
+    if (result != n * (42 * 100 + 11))
+        throw "Error: bad result: " + result;
+    o._f = 1000000000;
+    result = 0;
+    for (var i = 0; i < n; ++i) {
+        result += foo(o);
+    }
+    if (result != n * (1000000000 * 100 + 11))
+        throw "Error: bad result (2): " + result;
+})();

Added: trunk/JSTests/stress/exit-from-setter-by-val.js (0 => 250806)


--- trunk/JSTests/stress/exit-from-setter-by-val.js	                        (rev 0)
+++ trunk/JSTests/stress/exit-from-setter-by-val.js	2019-10-07 23:34:01 UTC (rev 250806)
@@ -0,0 +1,27 @@
+function field() { return "f"; }
+noInline(field);
+
+(function() {
+    var o = {_f:42};
+    o.__defineSetter__("f", function(value) { this._f = value * 100; });
+    var n = 50000;
+    function foo(o_, v_) {
+        let f = field();
+        var o = o_[f];
+        var v = v_[f];
+        o[f] = v;
+        o[f] = v + 1;
+    }
+    noInline(foo);
+    for (var i = 0; i < n; ++i) {
+        foo({f:o}, {f:11});
+    }
+    if (o._f != (11 + 1) * 100)
+        throw "Error: bad o._f: " + o._f;
+    for (var i = 0; i < n; ++i) {
+        foo({f:o}, {f:1000000000});
+    }
+    if (o._f != 100 * (1000000000 + 1))
+        throw "Error: bad o._f (2): " + o._f;
+})();
+

Modified: trunk/Source/JavaScriptCore/ChangeLog (250805 => 250806)


--- trunk/Source/JavaScriptCore/ChangeLog	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/ChangeLog	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1,3 +1,75 @@
+2019-10-07  Saam Barati  <sbar...@apple.com>
+
+        Allow OSR exit to the LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=197993
+
+        Reviewed by Tadeu Zagallo.
+
+        This patch makes it so we can OSR exit to the LLInt.
+        Here are the interesting implementation details:
+        
+        1. We no longer baseline compile everything in the inline stack.
+        
+        2. When the top frame is a LLInt frame, we exit to the corresponding
+        LLInt bytecode. However, we need to materialize the LLInt registers
+        for PC, PB, and metadata.
+        
+        3. When dealing with inline call frames where the caller is LLInt, we
+        need to return to the appropriate place. Let's consider we're exiting
+        at a place A->B (A calls B), where A is LLInt. If A is a normal call,
+        we place the return PC in the frame we materialize to B to be right
+        after the LLInt's inline cache for calls. If A is a varargs call, we place
+        it at the return location for vararg calls. The interesting scenario here
+        is where A is a getter/setter. This means that A might be get_by_id,
+        get_by_val, put_by_id, or put_by_val. Since the LLInt does not have any
+        form of IC for getters/setters, we make this work by creating new LLInt
+        "return location" stubs for these opcodes.
+        
+        4. We need to update what callee saves we store in the callee if the caller frame
+        is a LLInt frame. Let's consider an inline stack A->B->C, where A is a LLInt frame.
+        When we materialize the stack frame for B, we need to ensure that the LLInt callee
+        saves that A uses is stored into B's preserved callee saves. Specifically, this
+        is just the PB/metadata registers.
+        
+        This patch also fixes offlineasm's macro expansion to allow us to
+        use computed label names for global labels.
+        
+        In a future bug, I'm going to investigate some kind of control system for
+        throwing away baseline code when we tier up:
+        https://bugs.webkit.org/show_bug.cgi?id=202503
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * Sources.txt:
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::metadataTable):
+        (JSC::CodeBlock::instructionsRawPointer):
+        * dfg/DFGOSRExit.cpp:
+        (JSC::DFG::OSRExit::executeOSRExit):
+        (JSC::DFG::reifyInlinedCallFrames):
+        (JSC::DFG::adjustAndJumpToTarget):
+        (JSC::DFG::OSRExit::compileOSRExit):
+        * dfg/DFGOSRExit.h:
+        (JSC::DFG::OSRExitState::OSRExitState):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::callerReturnPC):
+        (JSC::DFG::calleeSaveSlot):
+        (JSC::DFG::reifyInlinedCallFrames):
+        (JSC::DFG::adjustAndJumpToTarget):
+        * dfg/DFGOSRExitCompilerCommon.h:
+        * dfg/DFGOSRExitPreparation.cpp:
+        (JSC::DFG::prepareCodeOriginForOSRExit): Deleted.
+        * dfg/DFGOSRExitPreparation.h:
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileFTLOSRExit):
+        * llint/LLIntData.h:
+        (JSC::LLInt::getCodePtr):
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * offlineasm/asm.rb:
+        * offlineasm/transform.rb:
+        * runtime/OptionsList.h:
+
 2019-10-07  Yusuke Suzuki  <ysuz...@apple.com>
 
         [JSC] Change signature of HostFunction to (JSGlobalObject*, CallFrame*)

Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (250805 => 250806)


--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2019-10-07 23:34:01 UTC (rev 250806)
@@ -182,7 +182,6 @@
 		0F235BE017178E1C00690C7F /* FTLOSRExitCompiler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BCA17178E1C00690C7F /* FTLOSRExitCompiler.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F235BE217178E1C00690C7F /* FTLThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BCC17178E1C00690C7F /* FTLThunks.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F235BEC17178E7300690C7F /* DFGOSRExitBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BE817178E7300690C7F /* DFGOSRExitBase.h */; };
-		0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */; };
 		0F24E54117EA9F5900ABB217 /* AssemblyHelpers.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F24E54217EA9F5900ABB217 /* CCallHelpers.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F24E53D17EA9F5900ABB217 /* CCallHelpers.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F24E54317EA9F5900ABB217 /* FPRInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F24E53E17EA9F5900ABB217 /* FPRInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -2296,8 +2295,6 @@
 		0F235BCC17178E1C00690C7F /* FTLThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLThunks.h; path = ftl/FTLThunks.h; sourceTree = "<group>"; };
 		0F235BE717178E7300690C7F /* DFGOSRExitBase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitBase.cpp; path = dfg/DFGOSRExitBase.cpp; sourceTree = "<group>"; };
 		0F235BE817178E7300690C7F /* DFGOSRExitBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitBase.h; path = dfg/DFGOSRExitBase.h; sourceTree = "<group>"; };
-		0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitPreparation.cpp; path = dfg/DFGOSRExitPreparation.cpp; sourceTree = "<group>"; };
-		0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitPreparation.h; path = dfg/DFGOSRExitPreparation.h; sourceTree = "<group>"; };
 		0F24E53B17EA9F5900ABB217 /* AssemblyHelpers.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AssemblyHelpers.cpp; sourceTree = "<group>"; };
 		0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AssemblyHelpers.h; sourceTree = "<group>"; };
 		0F24E53D17EA9F5900ABB217 /* CCallHelpers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CCallHelpers.h; sourceTree = "<group>"; };
@@ -7872,8 +7869,6 @@
 				0F392C881B46188400844728 /* DFGOSRExitFuzz.h */,
 				0FEFC9A71681A3B000567F53 /* DFGOSRExitJumpPlaceholder.cpp */,
 				0FEFC9A81681A3B000567F53 /* DFGOSRExitJumpPlaceholder.h */,
-				0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */,
-				0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */,
 				0F6237951AE45CA700D402EA /* DFGPhantomInsertionPhase.cpp */,
 				0F6237961AE45CA700D402EA /* DFGPhantomInsertionPhase.h */,
 				0FFFC94F14EF909500C72532 /* DFGPhase.cpp */,
@@ -9239,7 +9234,6 @@
 				0F7025AA1714B0FC00382C0E /* DFGOSRExitCompilerCommon.h in Headers */,
 				0F392C8A1B46188400844728 /* DFGOSRExitFuzz.h in Headers */,
 				0FEFC9AB1681A3B600567F53 /* DFGOSRExitJumpPlaceholder.h in Headers */,
-				0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */,
 				0F6237981AE45CA700D402EA /* DFGPhantomInsertionPhase.h in Headers */,
 				0FFFC95C14EF90AF00C72532 /* DFGPhase.h in Headers */,
 				0F2B9CEB19D0BA7D00B1D1B5 /* DFGPhiChildren.h in Headers */,

Modified: trunk/Source/JavaScriptCore/Sources.txt (250805 => 250806)


--- trunk/Source/JavaScriptCore/Sources.txt	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/Sources.txt	2019-10-07 23:34:01 UTC (rev 250806)
@@ -382,7 +382,6 @@
 dfg/DFGOSRExitCompilerCommon.cpp
 dfg/DFGOSRExitFuzz.cpp
 dfg/DFGOSRExitJumpPlaceholder.cpp
-dfg/DFGOSRExitPreparation.cpp
 dfg/DFGObjectAllocationSinkingPhase.cpp
 dfg/DFGObjectMaterializationData.cpp
 dfg/DFGOperations.cpp

Modified: trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb (250805 => 250806)


--- trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1238,5 +1238,13 @@
 op :llint_internal_function_call_trampoline
 op :llint_internal_function_construct_trampoline
 op :handleUncaughtException
+op :op_call_return_location
+op :op_construct_return_location
+op :op_call_varargs_slow_return_location
+op :op_construct_varargs_slow_return_location
+op :op_get_by_id_return_location
+op :op_get_by_val_return_location
+op :op_put_by_id_return_location
+op :op_put_by_val_return_location
 
 end_section :NativeHelpers

Modified: trunk/Source/JavaScriptCore/bytecode/CodeBlock.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/bytecode/CodeBlock.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/bytecode/CodeBlock.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -892,6 +892,9 @@
         return m_unlinkedCode->metadataSizeInBytes();
     }
 
+    MetadataTable* metadataTable() { return m_metadata.get(); }
+    const void* instructionsRawPointer() { return m_instructionsRawPointer; }
+
 protected:
     void finalizeLLIntInlineCaches();
 #if ENABLE(JIT)

Modified: trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -241,7 +241,7 @@
 
 inline CodeBlock* baselineCodeBlockForOriginAndBaselineCodeBlock(const CodeOrigin& codeOrigin, CodeBlock* baselineCodeBlock)
 {
-    ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
     auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     if (inlineCallFrame)
         return baselineCodeBlockForInlineCallFrame(inlineCallFrame);

Modified: trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGCapabilities.cpp	2019-10-07 23:34:01 UTC (rev 250806)
@@ -306,6 +306,14 @@
     case llint_internal_function_call_trampoline:
     case llint_internal_function_construct_trampoline:
     case handleUncaughtException:
+    case op_call_return_location:
+    case op_construct_return_location:
+    case op_call_varargs_slow_return_location:
+    case op_construct_varargs_slow_return_location:
+    case op_get_by_id_return_location:
+    case op_get_by_val_return_location:
+    case op_put_by_id_return_location:
+    case op_put_by_val_return_location:
         return CannotCompile;
     }
     return CannotCompile;

Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp	2019-10-07 23:34:01 UTC (rev 250806)
@@ -34,7 +34,6 @@
 #include "DFGGraph.h"
 #include "DFGMayExit.h"
 #include "DFGOSRExitCompilerCommon.h"
-#include "DFGOSRExitPreparation.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
 #include "DirectArguments.h"
@@ -372,11 +371,8 @@
         // results will be cached in the OSRExitState record for use of the rest of the
         // exit ramp code.
 
-        // Ensure we have baseline codeBlocks to OSR exit to.
-        prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
         CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
-        ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+        ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
 
         SpeculationRecovery* recovery = nullptr;
         if (exit.m_recoveryIndex != UINT_MAX) {
@@ -406,12 +402,20 @@
         adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
 
         CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
-        const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
-        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
-        ASSERT(codeLocation);
+        bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
+        void* jumpTarget;
+        if (exitToLLInt) {
+            unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+            const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+            MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
+            jumpTarget = destination.executableAddress();    
+        } else {
+            const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
+            CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
+            ASSERT(codeLocation);
+            jumpTarget = codeLocation.executableAddress();
+        }
 
-        void* jumpTarget = codeLocation.executableAddress();
-
         // Compute the value recoveries.
         Operands<ValueRecovery> operands;
         Vector<UndefinedOperandSpan> undefinedOperandSpans;
@@ -418,7 +422,7 @@
         dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands, &undefinedOperandSpans);
         ptrdiff_t stackPointerOffset = -static_cast<ptrdiff_t>(codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit) * sizeof(Register);
 
-        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile));
+        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile, exitToLLInt));
 
         if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
             Profiler::Database& database = *vm.m_perBytecodeProfiler;
@@ -446,7 +450,7 @@
 
     OSRExitState& exitState = *exit.exitState.get();
     CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
-    ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
 
     Operands<ValueRecovery>& operands = exitState.operands;
     Vector<UndefinedOperandSpan>& undefinedOperandSpans = exitState.undefinedOperandSpans;
@@ -757,7 +761,7 @@
     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
     // in presence of inlined tail calls.
     // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(outermostBaselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(outermostBaselineCodeBlock->jitType()));
     frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
 
     const CodeOrigin* codeOrigin;
@@ -768,6 +772,8 @@
         CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
         void* callerFrame = cpu.fp();
 
+        bool callerIsLLInt = false;
+
         if (!trueCaller) {
             ASSERT(inlineCallFrame->isTail());
             void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
@@ -781,46 +787,16 @@
         } else {
             CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
             unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
-            MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget;
+            void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt);
 
-            switch (trueCallerCallKind) {
-            case InlineCallFrame::Call:
-            case InlineCallFrame::Construct:
-            case InlineCallFrame::CallVarargs:
-            case InlineCallFrame::ConstructVarargs:
-            case InlineCallFrame::TailCall:
-            case InlineCallFrame::TailCallVarargs: {
-                CallLinkInfo* callLinkInfo =
-                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-                RELEASE_ASSERT(callLinkInfo);
-
-                jumpTarget = callLinkInfo->callReturnLocation();
-                break;
-            }
-
-            case InlineCallFrame::GetterCall:
-            case InlineCallFrame::SetterCall: {
-                StructureStubInfo* stubInfo =
-                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-                RELEASE_ASSERT(stubInfo);
-
-                jumpTarget = stubInfo->doneLocation();
-                break;
-            }
-
-            default:
-                RELEASE_ASSERT_NOT_REACHED();
-            }
-
             if (trueCaller->inlineCallFrame())
                 callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);
 
-            void* targetAddress = jumpTarget.executableAddress();
 #if CPU(ARM64E)
             void* newEntrySP = cpu.fp<uint8_t*>() + inlineCallFrame->returnPCOffset() + sizeof(void*);
-            targetAddress = retagCodePtr(targetAddress, JSInternalPtrTag, bitwise_cast<PtrTag>(newEntrySP));
+            jumpTarget = tagCodePtr(jumpTarget, bitwise_cast<PtrTag>(newEntrySP));
 #endif
-            frame.set<void*>(inlineCallFrame->returnPCOffset(), targetAddress);
+            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
         }
 
         frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
@@ -830,6 +806,14 @@
         // copy the prior contents of the tag registers already saved for the outer frame to this frame.
         saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
 
+        if (callerIsLLInt) {
+            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
+            frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR).offset, baselineCodeBlockForCaller->metadataTable());
+#if USE(JSVALUE64)
+            frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR).offset, baselineCodeBlockForCaller->instructionsRawPointer());
+#endif
+        }
+
         if (!inlineCallFrame->isVarargs())
             frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
         ASSERT(callerFrame);
@@ -894,6 +878,24 @@
     }
 
     vm.topCallFrame = context.fp<ExecState*>();
+
+    if (exitState->isJumpToLLInt) {
+        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
+        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+
+        context.gpr(LLInt::Registers::metadataTableGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->metadataTable());
+#if USE(JSVALUE64)
+        context.gpr(LLInt::Registers::pbGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->instructionsRawPointer());
+        context.gpr(LLInt::Registers::pcGPR) = static_cast<uintptr_t>(exit.m_codeOrigin.bytecodeIndex());
+#else
+        context.gpr(LLInt::Registers::pcGPR) = bitwise_cast<uintptr_t>(&currentInstruction);
+#endif
+
+        if (exit.isExceptionHandler())
+            vm.targetInterpreterPCForThrow = &currentInstruction;
+    }
+
     context.pc() = untagCodePtr<JSEntryPtrTag>(jumpTarget);
 }
 
@@ -1052,8 +1054,6 @@
     ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
     EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());
     
-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
     // Compute the value recoveries.
     Operands<ValueRecovery> operands;
     codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands);

Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -106,7 +106,7 @@
 enum class ExtraInitializationLevel;
 
 struct OSRExitState : RefCounted<OSRExitState> {
-    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile)
+    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile, bool isJumpToLLInt)
         : exit(exit)
         , codeBlock(codeBlock)
         , baselineCodeBlock(baselineCodeBlock)
@@ -118,6 +118,7 @@
         , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
         , jumpTarget(jumpTarget)
         , arrayProfile(arrayProfile)
+        , isJumpToLLInt(isJumpToLLInt)
     { }
 
     OSRExitBase& exit;
@@ -131,6 +132,7 @@
     double memoryUsageAdjustedThreshold;
     void* jumpTarget;
     ArrayProfile* arrayProfile;
+    bool isJumpToLLInt;
 
     ExtraInitializationLevel extraInitializationLevel;
     Profiler::OSRExit* profilerExit { nullptr };

Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp	2019-10-07 23:34:01 UTC (rev 250806)
@@ -28,11 +28,13 @@
 
 #if ENABLE(DFG_JIT)
 
+#include "Bytecodes.h"
 #include "DFGJITCode.h"
 #include "DFGOperations.h"
 #include "JIT.h"
 #include "JSCJSValueInlines.h"
 #include "JSCInlines.h"
+#include "LLIntData.h"
 #include "StructureStubInfo.h"
 
 namespace JSC { namespace DFG {
@@ -136,12 +138,105 @@
     doneAdjusting.link(&jit);
 }
 
+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeIndex, InlineCallFrame::Kind trueCallerCallKind, bool& callerIsLLInt)
+{
+    callerIsLLInt = Options::forceOSRExitToLLInt() || baselineCodeBlockForCaller->jitType() == JITType::InterpreterThunk;
+
+    void* jumpTarget;
+
+    if (callerIsLLInt) {
+        const Instruction& callInstruction = *baselineCodeBlockForCaller->instructions().at(callBytecodeIndex).ptr();
+#define LLINT_RETURN_LOCATION(name) (callInstruction.isWide16() ? LLInt::getWide16CodePtr<NoPtrTag>(name##_return_location) : (callInstruction.isWide32() ? LLInt::getWide32CodePtr<NoPtrTag>(name##_return_location) : LLInt::getCodePtr<NoPtrTag>(name##_return_location))).executableAddress()
+
+        switch (trueCallerCallKind) {
+        case InlineCallFrame::Call:
+            jumpTarget = LLINT_RETURN_LOCATION(op_call);
+            break;
+        case InlineCallFrame::Construct:
+            jumpTarget = LLINT_RETURN_LOCATION(op_construct);
+            break;
+        case InlineCallFrame::CallVarargs:
+            jumpTarget = LLINT_RETURN_LOCATION(op_call_varargs_slow);
+            break;
+        case InlineCallFrame::ConstructVarargs:
+            jumpTarget = LLINT_RETURN_LOCATION(op_construct_varargs_slow);
+            break;
+        case InlineCallFrame::GetterCall: {
+            if (callInstruction.opcodeID() == op_get_by_id)
+                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_id);
+            else if (callInstruction.opcodeID() == op_get_by_val)
+                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_val);
+            else
+                RELEASE_ASSERT_NOT_REACHED();
+            break;
+        }
+        case InlineCallFrame::SetterCall: {
+            if (callInstruction.opcodeID() == op_put_by_id)
+                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_id);
+            else if (callInstruction.opcodeID() == op_put_by_val)
+                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_val);
+            else
+                RELEASE_ASSERT_NOT_REACHED();
+            break;
+        }
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+
+#undef LLINT_RETURN_LOCATION
+
+    } else {
+        switch (trueCallerCallKind) {
+        case InlineCallFrame::Call:
+        case InlineCallFrame::Construct:
+        case InlineCallFrame::CallVarargs:
+        case InlineCallFrame::ConstructVarargs: {
+            CallLinkInfo* callLinkInfo =
+                baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
+            RELEASE_ASSERT(callLinkInfo);
+
+            jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress();
+            break;
+        }
+
+        case InlineCallFrame::GetterCall:
+        case InlineCallFrame::SetterCall: {
+            StructureStubInfo* stubInfo =
+                baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
+            RELEASE_ASSERT(stubInfo);
+
+            jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress();
+            break;
+        }
+
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+    }
+
+    return jumpTarget;
+}
+
+CCallHelpers::Address calleeSaveSlot(InlineCallFrame* inlineCallFrame, CodeBlock* baselineCodeBlock, GPRReg calleeSave)
+{
+    const RegisterAtOffsetList* calleeSaves = baselineCodeBlock->calleeSaveRegisters();
+    for (unsigned i = 0; i < calleeSaves->size(); i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (entry.reg() != calleeSave)
+            continue;
+        return CCallHelpers::Address(CCallHelpers::framePointerRegister, static_cast<VirtualRegister>(inlineCallFrame->stackOffset).offsetInBytes() + entry.offset());
+    }
+
+    RELEASE_ASSERT_NOT_REACHED();
+    return CCallHelpers::Address(CCallHelpers::framePointerRegister);
+}
+
 void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
 {
     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
     // in presence of inlined tail calls.
     // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(jit.baselineCodeBlock()->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(jit.baselineCodeBlock()->jitType()));
     jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)CallFrameSlot::codeBlock));
 
     const CodeOrigin* codeOrigin;
@@ -152,6 +247,8 @@
         CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
         GPRReg callerFrameGPR = GPRInfo::callFrameRegister;
 
+        bool callerIsLLInt = false;
+
         if (!trueCaller) {
             ASSERT(inlineCallFrame->isTail());
             jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT3);
@@ -167,37 +264,8 @@
         } else {
             CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
             unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
-            void* jumpTarget = nullptr;
+            void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt);
 
-            switch (trueCallerCallKind) {
-            case InlineCallFrame::Call:
-            case InlineCallFrame::Construct:
-            case InlineCallFrame::CallVarargs:
-            case InlineCallFrame::ConstructVarargs:
-            case InlineCallFrame::TailCall:
-            case InlineCallFrame::TailCallVarargs: {
-                CallLinkInfo* callLinkInfo =
-                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-                RELEASE_ASSERT(callLinkInfo);
-
-                jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress();
-                break;
-            }
-
-            case InlineCallFrame::GetterCall:
-            case InlineCallFrame::SetterCall: {
-                StructureStubInfo* stubInfo =
-                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-                RELEASE_ASSERT(stubInfo);
-
-                jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress();
-                break;
-            }
-
-            default:
-                RELEASE_ASSERT_NOT_REACHED();
-            }
-
             if (trueCaller->inlineCallFrame()) {
                 jit.addPtr(
                     AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue)),
@@ -227,6 +295,14 @@
             trueCaller ? AssemblyHelpers::UseExistingTagRegisterContents : AssemblyHelpers::CopyBaselineCalleeSavedRegistersFromBaseFrame,
             GPRInfo::regT2);
 
+        if (callerIsLLInt) {
+            CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
+            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->metadataTable()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR));
+#if USE(JSVALUE64)
+            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->instructionsRawPointer()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR));
+#endif
+        }
+
         if (!inlineCallFrame->isVarargs())
             jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
 #if USE(JSVALUE64)
@@ -301,11 +377,35 @@
 
     CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
     ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
-    ASSERT(codeBlockForExit->jitType() == JITType::BaselineJIT);
-    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
-    ASSERT(codeLocation);
+    ASSERT(JITCode::isBaselineCode(codeBlockForExit->jitType()));
 
-    void* jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
+    void* jumpTarget;
+    bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
+    if (exitToLLInt) {
+        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+        MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
+
+        if (exit.isExceptionHandler()) {
+            jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), GPRInfo::regT2);
+            jit.storePtr(GPRInfo::regT2, &vm.targetInterpreterPCForThrow);
+        }
+
+        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->metadataTable()), LLInt::Registers::metadataTableGPR);
+#if USE(JSVALUE64)
+        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->instructionsRawPointer()), LLInt::Registers::pbGPR);
+        jit.move(CCallHelpers::TrustedImm32(bytecodeOffset), LLInt::Registers::pcGPR);
+#else
+        jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), LLInt::Registers::pcGPR);
+#endif
+        jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
+    } else {
+        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
+        ASSERT(codeLocation);
+
+        jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
+    }
+
     jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(codeBlockForExit) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister);
     if (exit.isExceptionHandler()) {
         // Since we're jumping to op_catch, we need to set callFrameForCatch.

Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -39,6 +39,8 @@
 void handleExitCounts(CCallHelpers&, const OSRExitBase&);
 void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&);
 void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);
+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeOffset, InlineCallFrame::Kind callerKind, bool& callerIsLLInt);
+CCallHelpers::Address calleeSaveSlot(InlineCallFrame*, CodeBlock* baselineCodeBlock, GPRReg calleeSave);
 
 template <typename JITCodeType>
 void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM& vm, JITType jitType)

Deleted: trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1,53 +0,0 @@
-/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include "config.h"
-#include "DFGOSRExitPreparation.h"
-
-#if ENABLE(DFG_JIT)
-
-#include "CodeBlock.h"
-#include "JIT.h"
-#include "JITCode.h"
-#include "JITWorklist.h"
-#include "JSCInlines.h"
-
-namespace JSC { namespace DFG {
-
-void prepareCodeOriginForOSRExit(ExecState* exec, CodeOrigin codeOrigin)
-{
-    VM& vm = exec->vm();
-    DeferGC deferGC(vm.heap);
-    
-    for (; codeOrigin.inlineCallFrame(); codeOrigin = codeOrigin.inlineCallFrame()->directCaller) {
-        CodeBlock* codeBlock = codeOrigin.inlineCallFrame()->baselineCodeBlock.get();
-        JITWorklist::ensureGlobalWorklist().compileNow(codeBlock);
-    }
-}
-
-} } // namespace JSC::DFG
-
-#endif // ENABLE(DFG_JIT)
-

Deleted: trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1,48 +0,0 @@
-/*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#pragma once
-
-#if ENABLE(DFG_JIT)
-
-#include "CallFrame.h"
-#include "CodeOrigin.h"
-
-namespace JSC { namespace DFG {
-
-// Make sure all code on our inline stack is JIT compiled. This is necessary since
-// we may opt to inline a code block even before it had ever been compiled by the
-// JIT, but our OSR exit infrastructure currently only works if the target of the
-// OSR exit is JIT code. This could be changed since there is nothing particularly
-// hard about doing an OSR exit into the interpreter, but for now this seems to make
-// sense in that if we're OSR exiting from inlined code of a DFG code block, then
-// probably it's a good sign that the thing we're exiting into is hot. Even more
-// interestingly, since the code was inlined, it may never otherwise get JIT
-// compiled since the act of inlining it may ensure that it otherwise never runs.
-void prepareCodeOriginForOSRExit(ExecState*, CodeOrigin);
-
-} } // namespace JSC::DFG
-
-#endif // ENABLE(DFG_JIT)

Modified: trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp (250805 => 250806)


--- trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp	2019-10-07 23:34:01 UTC (rev 250806)
@@ -30,7 +30,6 @@
 
 #include "BytecodeStructs.h"
 #include "DFGOSRExitCompilerCommon.h"
-#include "DFGOSRExitPreparation.h"
 #include "FTLExitArgumentForOperand.h"
 #include "FTLJITCode.h"
 #include "FTLLocation.h"
@@ -544,8 +543,6 @@
         }
     }
 
-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
     compileStub(exitID, jitCode, exit, &vm, codeBlock);
 
     MacroAssembler::repatchJump(

Modified: trunk/Source/JavaScriptCore/llint/LLIntData.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/llint/LLIntData.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/llint/LLIntData.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -25,6 +25,8 @@
 
 #pragma once
 
+#include "GPRInfo.h"
+#include "Instruction.h"
 #include "JSCJSValue.h"
 #include "MacroAssemblerCodeRef.h"
 #include "Opcode.h"
@@ -32,7 +34,6 @@
 namespace JSC {
 
 class VM;
-struct Instruction;
 
 #if ENABLE(C_LOOP)
 typedef OpcodeID LLIntCode;
@@ -145,6 +146,16 @@
 }
 
 template<PtrTag tag>
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(const Instruction& instruction)
+{
+    if (instruction.isWide16())
+        return getWide16CodePtr<tag>(instruction.opcodeID());
+    if (instruction.isWide32())
+        return getWide32CodePtr<tag>(instruction.opcodeID());
+    return getCodePtr<tag>(instruction.opcodeID());
+}
+
+template<PtrTag tag>
 ALWAYS_INLINE MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID opcodeID)
 {
     return MacroAssemblerCodeRef<tag>::createSelfManagedCodeRef(getCodePtr<tag>(opcodeID));
@@ -184,4 +195,23 @@
     return bitwise_cast<void*>(glueHelper);
 }
 
+#if ENABLE(JIT)
+struct Registers {
+    static const GPRReg pcGPR = GPRInfo::regT4;
+
+#if CPU(X86_64) && !OS(WINDOWS)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS1;
+    static const GPRReg pbGPR = GPRInfo::regCS2;
+#elif CPU(X86_64) && OS(WINDOWS)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS3;
+    static const GPRReg pbGPR = GPRInfo::regCS4;
+#elif CPU(ARM64)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS6;
+    static const GPRReg pbGPR = GPRInfo::regCS7;
+#elif CPU(MIPS) || CPU(ARM_THUMB2)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS0;
+#endif
+};
+#endif
+
 } } // namespace JSC::LLInt

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (250805 => 250806)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm	2019-10-07 23:34:01 UTC (rev 250806)
@@ -929,12 +929,30 @@
     end
 end
 
-macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag)
+macro defineOSRExitReturnLabel(opcodeName, size)
+    macro defineNarrow()
+        _%opcodeName%_return_location:
+    end
+
+    macro defineWide16()
+        _%opcodeName%_return_location_wide16:
+    end
+
+    macro defineWide32()
+        _%opcodeName%_return_location_wide32:
+    end
+
+    size(defineNarrow, defineWide16, defineWide32, macro (f) f() end)
+end
+
+macro callTargetFunction(opcodeName, size, opcodeStruct, dispatch, callee, callPtrTag)
     if C_LOOP or C_LOOP_WIN
         cloopCallJSFunction callee
     else
         call callee, callPtrTag
     end
+
+    defineOSRExitReturnLabel(opcodeName, size)
     restoreStackPointerAfterCall()
     dispatchAfterCall(size, opcodeStruct, dispatch)
 end
@@ -1004,7 +1022,7 @@
     jmp callee, callPtrTag
 end
 
-macro slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
+macro slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
     callCallSlowPath(
         slowPath,
         # Those are r0 and r1
@@ -1013,10 +1031,19 @@
             move calleeFramePtr, sp
             prepareCall(callee, t2, t3, t4, SlowPathPtrTag)
         .dontUpdateSP:
-            callTargetFunction(size, opcodeStruct, dispatch, callee, SlowPathPtrTag)
+            callTargetFunction(%opcodeName%_slow, size, opcodeStruct, dispatch, callee, SlowPathPtrTag)
         end)
 end
 
+macro getterSetterOSRExitReturnPoint(opName, size)
+    crash() # We don't reach this in straight line code. We only reach it via returning to the code below when reconstructing stack frames during OSR exit.
+
+    defineOSRExitReturnLabel(opName, size)
+
+    restoreStackPointerAfterCall()
+    loadi ArgumentCount + TagOffset[cfr], PC
+end
+
 macro arrayProfile(offset, cellAndIndexingType, metadata, scratch)
     const cell = cellAndIndexingType
     const indexingType = cellAndIndexingType 
@@ -1741,7 +1768,7 @@
 callOp(construct, OpConstruct, prepareForRegularCall, macro (getu, metadata) end)
 
 
-macro doCallVarargs(size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
+macro doCallVarargs(opcodeName, size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
     callSlowPath(frameSlowPath)
     branchIfException(_llint_throw_from_slow_path_trampoline)
     # calleeFrame in r1
@@ -1756,12 +1783,12 @@
             subp r1, CallerFrameAndPCSize, sp
         end
     end
-    slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
+    slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
 end
 
 
 llintOp(op_call_varargs, OpCallVarargs, macro (size, get, dispatch)
-    doCallVarargs(size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall)
+    doCallVarargs(op_call_varargs, size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall)
 end)
 
 llintOp(op_tail_call_varargs, OpTailCallVarargs, macro (size, get, dispatch)
@@ -1768,7 +1795,7 @@
     checkSwitchToJITForEpilogue()
     # We lie and perform the tail call instead of preparing it since we can't
     # prepare the frame for a call opcode
-    doCallVarargs(size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall)
+    doCallVarargs(op_tail_call_varargs, size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall)
 end)
 
 
@@ -1776,12 +1803,12 @@
     checkSwitchToJITForEpilogue()
     # We lie and perform the tail call instead of preparing it since we can't
     # prepare the frame for a call opcode
-    doCallVarargs(size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall)
+    doCallVarargs(op_tail_call_forward_arguments, size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall)
 end)
 
 
 llintOp(op_construct_varargs, OpConstructVarargs, macro (size, get, dispatch)
-    doCallVarargs(size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall)
+    doCallVarargs(op_construct_varargs, size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall)
 end)
 
 
@@ -1820,6 +1847,7 @@
 
 _llint_op_call_eval:
     slowPathForCall(
+        op_call_eval_narrow,
         narrow,
         OpCallEval,
         macro () dispatchOp(narrow, op_call_eval) end,
@@ -1828,6 +1856,7 @@
 
 _llint_op_call_eval_wide16:
     slowPathForCall(
+        op_call_eval_wide16,
         wide16,
         OpCallEval,
         macro () dispatchOp(wide16, op_call_eval) end,
@@ -1836,6 +1865,7 @@
 
 _llint_op_call_eval_wide32:
     slowPathForCall(
+        op_call_eval_wide32,
         wide32,
         OpCallEval,
         macro () dispatchOp(wide32, op_call_eval) end,

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (250805 => 250806)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1403,6 +1403,13 @@
 .opGetByIdSlow:
     callSlowPath(_llint_slow_path_get_by_id)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_get_by_id, size)
+    metadata(t2, t3)
+    valueProfile(OpGetById, t2, r1, r0)
+    return(r1, r0)
+
 end)
 
 
@@ -1465,6 +1472,11 @@
 .opPutByIdSlow:
     callSlowPath(_llint_slow_path_put_by_id)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_put_by_id, size)
+    dispatch()
+
 end)
 
 
@@ -1516,10 +1528,17 @@
 .opGetByValSlow:
     callSlowPath(_llint_slow_path_get_by_val)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_get_by_val, size)
+    metadata(t2, t3)
+    valueProfile(OpGetByVal, t2, r1, r0)
+    return(r1, r0)
+
 end)
 
 
-macro putByValOp(opcodeName, opcodeStruct)
+macro putByValOp(opcodeName, opcodeStruct, osrExitPoint)
     llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return)
         macro contiguousPutByVal(storeCallback)
             biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds
@@ -1607,13 +1626,20 @@
     .opPutByValSlow:
         callSlowPath(_llint_slow_path_%opcodeName%)
         dispatch()
+
+    .osrExitPoint:
+        osrExitPoint(size, dispatch)
     end)
 end
 
 
-putByValOp(put_by_val, OpPutByVal)
+putByValOp(put_by_val, OpPutByVal, macro (size, dispatch)
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_put_by_val, size)
+    dispatch()
+end)
 
-putByValOp(put_by_val_direct, OpPutByValDirect)
+putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end)
 
 
 macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp)
@@ -1880,10 +1906,10 @@
         storei CellTag, Callee + TagOffset[t3]
         move t3, sp
         prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
-        callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
+        callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
 
     .opCallSlow:
-        slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
+        slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
     end)
 end
 

Modified: trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (250805 => 250806)


--- trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1328,7 +1328,6 @@
     dispatch()
 end)
 
-
 llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return)
     metadata(t2, t1)
     loadb OpGetById::Metadata::m_modeMetadata.mode[t2], t1
@@ -1379,6 +1378,13 @@
 .opGetByIdSlow:
     callSlowPath(_llint_slow_path_get_by_id)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_get_by_id, size)
+    metadata(t2, t3)
+    valueProfile(OpGetById, t2, r0)
+    return(r0)
+
 end)
 
 
@@ -1451,6 +1457,11 @@
 .opPutByIdSlow:
     callSlowPath(_llint_slow_path_put_by_id)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_put_by_id, size)
+    dispatch()
+
 end)
 
 
@@ -1622,10 +1633,17 @@
 .opGetByValSlow:
     callSlowPath(_llint_slow_path_get_by_val)
     dispatch()
+
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_get_by_val, size)
+    metadata(t5, t2)
+    valueProfile(OpGetByVal, t5, r0)
+    return(r0)
+
 end)
 
 
-macro putByValOp(opcodeName, opcodeStruct)
+macro putByValOp(opcodeName, opcodeStruct, osrExitPoint)
     llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return)
         macro contiguousPutByVal(storeCallback)
             biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds
@@ -1713,12 +1731,19 @@
     .opPutByValSlow:
         callSlowPath(_llint_slow_path_%opcodeName%)
         dispatch()
+
+        osrExitPoint(size, dispatch)
+        
     end)
 end
 
-putByValOp(put_by_val, OpPutByVal)
+putByValOp(put_by_val, OpPutByVal, macro (size, dispatch)
+.osrReturnPoint:
+    getterSetterOSRExitReturnPoint(op_put_by_val, size)
+    dispatch()
+end)
 
-putByValOp(put_by_val_direct, OpPutByValDirect)
+putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end)
 
 
 macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp)
@@ -2007,10 +2032,10 @@
         storei t2, ArgumentCount + PayloadOffset[t3]
         move t3, sp
         prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
-        callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
+        callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
 
     .opCallSlow:
-        slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
+        slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
     end)
 end
 

Modified: trunk/Source/JavaScriptCore/offlineasm/asm.rb (250805 => 250806)


--- trunk/Source/JavaScriptCore/offlineasm/asm.rb	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/offlineasm/asm.rb	2019-10-07 23:34:01 UTC (rev 250806)
@@ -401,7 +401,7 @@
             lowLevelAST = lowLevelAST.resolve(buildOffsetsMap(lowLevelAST, offsetsList))
             lowLevelAST.validate
             emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
-                 $currentSettings = concreteSettings
+                $currentSettings = concreteSettings
                 $asm.inAsm {
                     lowLevelAST.lower(backend)
                 }

Modified: trunk/Source/JavaScriptCore/offlineasm/transform.rb (250805 => 250806)


--- trunk/Source/JavaScriptCore/offlineasm/transform.rb	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/offlineasm/transform.rb	2019-10-07 23:34:01 UTC (rev 250806)
@@ -259,7 +259,10 @@
                     match
                 end
             }
-            Label.forName(codeOrigin, name, @definedInFile)
+            result = Label.forName(codeOrigin, name, @definedInFile)
+            result.setGlobal() if global?
+            result.clearExtern unless extern?
+            result
         else
             self
         end
@@ -272,7 +275,10 @@
                 raise "Unknown variable `#{var.originalName}` in substitution at #{codeOrigin}" unless mapping[var]
                 mapping[var].name
             }
-            Label.forName(codeOrigin, name, @definedInFile)
+            result = Label.forName(codeOrigin, name, @definedInFile)
+            result.setGlobal() if global?
+            result.clearExtern unless extern?
+            result
         else
             self
         end
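
Both transform.rb hunks above apply the same fix: after macro variables are substituted into a label name, the freshly looked-up Label must have its global/extern attributes re-applied, which is what lets computed label names resolve to global labels (the new OSR return-location stubs depend on this). A minimal Ruby sketch of the idea, using simplified stand-in classes rather than the real offlineasm AST in offlineasm/ast.rb; the label name used below is illustrative only:

    # Toy illustration only; the real Label class lives in offlineasm/ast.rb
    # and carries more state (extern flag, code origin, ...).
    class Label
        @@labels = {}
        attr_reader :name

        def self.forName(name)
            @@labels[name] ||= Label.new(name)
        end

        def initialize(name)
            @name = name
            @global = false
        end

        def setGlobal
            @global = true
        end

        def global?
            @global
        end
    end

    # Substituting %variables% into a label name yields a *different* Label
    # object, so attributes of the template label (here, "global") must be
    # copied across explicitly -- otherwise the computed label silently loses
    # its global-ness.
    def substitute(label, mapping)
        name = label.name.gsub(/%(\w+)%/) { mapping[$1] }
        result = Label.forName(name)
        result.setGlobal if label.global?
        result
    end

    template = Label.forName("_%opcodeName%_return_location")
    template.setGlobal
    puts substitute(template, "opcodeName" => "llint_op_get_by_id").global?  # => true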

Modified: trunk/Source/JavaScriptCore/runtime/OptionsList.h (250805 => 250806)


--- trunk/Source/JavaScriptCore/runtime/OptionsList.h	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Source/JavaScriptCore/runtime/OptionsList.h	2019-10-07 23:34:01 UTC (rev 250806)
@@ -464,6 +464,7 @@
     v(OptionString, dumpJITMemoryPath, nullptr, Restricted, nullptr) \
     v(Double, dumpJITMemoryFlushInterval, 10, Restricted, "Maximum time in between flushes of the JIT memory dump in seconds.") \
     v(Bool, useUnlinkedCodeBlockJettisoning, false, Normal, "If true, UnlinkedCodeBlock can be jettisoned.") \
+    v(Bool, forceOSRExitToLLInt, false, Normal, "If true, we always exit to the LLInt. If false, we exit to whatever is most convenient.") \
 
 enum OptionEquivalence {
     SameOption,

Modified: trunk/Tools/ChangeLog (250805 => 250806)


--- trunk/Tools/ChangeLog	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Tools/ChangeLog	2019-10-07 23:34:01 UTC (rev 250806)
@@ -1,3 +1,12 @@
+2019-10-07  Saam Barati  <sbar...@apple.com>
+
+        Allow OSR exit to the LLInt
+        https://bugs.webkit.org/show_bug.cgi?id=197993
+
+        Reviewed by Tadeu Zagallo.
+
+        * Scripts/run-jsc-stress-tests:
+
 2019-10-07  Kate Cheney  <katherine_che...@apple.com>
 
         Domain relationships in the ITP Database should be inserted in a single query and ignore repeat insert attempts. (202604)

Modified: trunk/Tools/Scripts/run-jsc-stress-tests (250805 => 250806)


--- trunk/Tools/Scripts/run-jsc-stress-tests	2019-10-07 23:20:40 UTC (rev 250805)
+++ trunk/Tools/Scripts/run-jsc-stress-tests	2019-10-07 23:34:01 UTC (rev 250806)
@@ -495,6 +495,7 @@
 B3O0_OPTIONS = ["--maxDFGNodesInBasicBlockForPreciseAnalysis=100", "--defaultB3OptLevel=0"]
 FTL_OPTIONS = ["--useFTLJIT=true"]
 PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"]
+FORCE_LLINT_EXIT_OPTIONS = ["--forceOSRExitToLLInt=true"]
 
 require_relative "webkitruby/jsc-stress-test-writer-#{$testWriter}"
 
@@ -708,7 +709,7 @@
 end
 
 def runFTLNoCJITB3O0(*optionalTestSpecificOptions)
-    run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + optionalTestSpecificOptions))
+    run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
 end
 
 def runFTLNoCJITValidate(*optionalTestSpecificOptions)
@@ -728,7 +729,7 @@
 end
 
 def runDFGEager(*optionalTestSpecificOptions)
-    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions))
+    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
 end
 
 def runDFGEagerNoCJITValidate(*optionalTestSpecificOptions)
@@ -745,7 +746,7 @@
 end
 
 def runFTLEagerNoCJITValidate(*optionalTestSpecificOptions)
-    run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + optionalTestSpecificOptions))
+    run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
 end
 
 def runFTLEagerNoCJITB3O1(*optionalTestSpecificOptions)
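
For local experimentation, a configuration dedicated to this exit path could be written in the same style as the runners above. The runner below is hypothetical and not part of this change (the patch only threads FORCE_LLINT_EXIT_OPTIONS into existing configurations); it simply shows how the new option array composes with the others defined earlier in the script:

    # Hypothetical runner, shown only to illustrate composing
    # FORCE_LLINT_EXIT_OPTIONS with the existing option arrays.
    def runFTLEagerForceLLIntExit(*optionalTestSpecificOptions)
        run("ftl-eager-force-llint-exit",
            *(FTL_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS +
              FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
    end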