Title: [150657] branches/dfgFourthTier/Source/JavaScriptCore
Revision: 150657
Author: fpi...@apple.com
Date: 2013-05-24 14:06:07 -0700 (Fri, 24 May 2013)

Log Message

fourthTier: add heuristics to reduce the likelihood of a trivially inlineable function being independently compiled by the concurrent JIT
https://bugs.webkit.org/show_bug.cgi?id=116557

Reviewed by Geoffrey Garen.
        
This introduces a fairly comprehensive mechanism for preventing trivially inlineable
functions from being compiled independently of all of the things into which they end
up being inlined.
        
The trick is CodeBlock::m_shouldAlwaysBeInlined, or SABI for short (that's what the
debug logging calls it). A SABI function is one that we currently believe should
never be DFG optimized because it should always be inlined into the functions that
call it. SABI follows "innocent until proven guilty": all functions start out SABI
and have SABI set to false if we see proof that the function may be called in some
possibly non-inlineable way. So long as a function is SABI, it will not tier up to
the DFG: cti_optimize will perpetually postpone its optimization. Because SABI has
such a severe effect, we make the burden of proof of guilt quite low. SABI gets
cleared if any of the following happens:
        
- You get called from native code (either through CallData or CachedCall).
        
- You get called from an eval, since eval code takes a long time to get DFG
  optimized.
        
- You get called from global code, since global code often doesn't tier up because
  it's run-once.
        
- You get called recursively, where recursion is detected by a stack walk of depth
  Options::maximumInliningDepth().
        
- You get called through an unlinked virtual call.
        
- You get called from DFG code, since if the caller was already DFG optimized and
  didn't inline you, then obviously you might not get inlined.
        
- You've tiered up to the baseline JIT and you get called from the interpreter.
  The idea is that this ensures you stay SABI only if you're called no more
  frequently than any of your callers.
        
- You get called from a code block that isn't a DFG candidate.
        
- You aren't an inlining candidate.
        
Most of the heuristics for SABI are in CodeBlock::noticeIncomingCall().
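
To make the gate concrete, here is a minimal sketch of the tier-up check,
distilled from the cti_optimize change in JITStubs.cpp shown in the diff below;
it is illustrative only, not the verbatim stub code.

    // Sketch: cti_optimize's SABI gate, simplified from the JITStubs.cpp
    // hunk in this patch. While the flag is set, tier-up is postponed.
    if (codeBlock->m_shouldAlwaysBeInlined) {
        codeBlock->updateAllPredictions(); // keep profiling data fresh
        codeBlock->optimizeAfterWarmUp();  // push the threshold out again
        return;                            // stay in the baseline JIT for now
    }

Since noticeIncomingCall() and friends can clear the flag at any time, a
postponed code block still tiers up normally once some caller supplies proof
that inlining may not happen.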
        
This is neutral on SunSpider and V8Spider, and appears to be a slight speed-up on
V8v7, which was previously adversely affected by concurrent compilation; in
particular, it is a speed-up on those V8v7 benchmarks that saw regressions from
concurrent compilation. I also confirmed that, on V8/richards for example, it
dramatically reduces the number of code blocks that get DFG compiled.
        
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpAssumingJITType):
(JSC::CodeBlock::CodeBlock):
(JSC::CodeBlock::linkIncomingCall):
(JSC):
(JSC::CodeBlock::noticeIncomingCall):
* bytecode/CodeBlock.h:
(CodeBlock):
* dfg/DFGCapabilities.h:
(JSC::DFG::mightInlineFunction):
(DFG):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThread):
* dfg/DFGRepatch.cpp:
(JSC::DFG::dfgLinkFor):
* interpreter/Interpreter.cpp:
(JSC::Interpreter::executeCall):
(JSC::Interpreter::executeConstruct):
(JSC::Interpreter::prepareForRepeatCall):
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
(JSC::JIT::linkFor):
* jit/JIT.h:
(JIT):
* jit/JITStubs.cpp:
(JSC::DEFINE_STUB_FUNCTION):
(JSC::lazyLinkFor):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::setUpCall):

Modified Paths

branches/dfgFourthTier/Source/JavaScriptCore/ChangeLog
branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.cpp
branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.h
branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGCapabilities.h
branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGPlan.cpp
branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGRepatch.cpp
branches/dfgFourthTier/Source/JavaScriptCore/interpreter/Interpreter.cpp
branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.cpp
branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.h
branches/dfgFourthTier/Source/JavaScriptCore/jit/JITStubs.cpp
branches/dfgFourthTier/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp

Diff

Modified: branches/dfgFourthTier/Source/JavaScriptCore/ChangeLog (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/ChangeLog	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/ChangeLog	2013-05-24 21:06:07 UTC (rev 150657)
@@ -1,5 +1,88 @@
 2013-05-23  Filip Pizlo  <fpi...@apple.com>
 
+        fourthTier: add heuristics to reduce the likelihood of a trivially inlineable function being independently compiled by the concurrent JIT
+        https://bugs.webkit.org/show_bug.cgi?id=116557
+
+        Reviewed by Geoffrey Garen.
+        
+        This introduces a fairly comprehensive mechanism for preventing trivially inlineable
+        functions from being compiled independently of all of the things into which they end
+        up being inlined.
+        
+        The trick is CodeBlock::m_shouldAlwaysBeInlined, or SABI for short (that's what the
+        debug logging calls it). A SABI function is one that we currently believe should
+        never be DFG optimized because it should always be inlined into the functions that
+        call it. SABI follows "innocent until proven guilty": all functions start out SABI
+        and have SABI set to false if we see proof that the function may be called in some
+        possibly non-inlineable way. So long as a function is SABI, it will not tier up to
+        the DFG: cti_optimize will perpetually postpone its optimization. Because SABI has
+        such a severe effect, we make the burden of proof of guilt quite low. SABI gets
+        cleared if any of the following happens:
+        
+        - You get called from native code (either through CallData or CachedCall).
+        
+        - You get called from an eval, since eval code takes a long time to get DFG
+          optimized.
+        
+        - You get called from global code, since global code often doesn't tier up because
+          it's run-once.
+        
+        - You get called recursively, where recursion is detected by a stack walk of depth
+          Options::maximumInliningDepth().
+        
+        - You get called through an unlinked virtual call.
+        
+        - You get called from DFG code, since if the caller was already DFG optimized and
+          didn't inline you, then obviously you might not get inlined.
+        
+        - You've tiered up to the baseline JIT and you get called from the interpreter.
+          The idea is that this ensures you stay SABI only if you're called no more
+          frequently than any of your callers.
+        
+        - You get called from a code block that isn't a DFG candidate.
+        
+        - You aren't an inlining candidate.
+        
+        Most of the heuristics for SABI are in CodeBlock::noticeIncomingCall().
+        
+        This is neutral on SunSpider and V8Spider, and appears to be a slight speed-up on
+        V8v7, which was previously adversely affected by concurrent compilation; in
+        particular, it is a speed-up on those V8v7 benchmarks that saw regressions from
+        concurrent compilation. I also confirmed that, on V8/richards for example, it
+        dramatically reduces the number of code blocks that get DFG compiled.
+        
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::dumpAssumingJITType):
+        (JSC::CodeBlock::CodeBlock):
+        (JSC::CodeBlock::linkIncomingCall):
+        (JSC):
+        (JSC::CodeBlock::noticeIncomingCall):
+        * bytecode/CodeBlock.h:
+        (CodeBlock):
+        * dfg/DFGCapabilities.h:
+        (JSC::DFG::mightInlineFunction):
+        (DFG):
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::Plan::compileInThread):
+        * dfg/DFGRepatch.cpp:
+        (JSC::DFG::dfgLinkFor):
+        * interpreter/Interpreter.cpp:
+        (JSC::Interpreter::executeCall):
+        (JSC::Interpreter::executeConstruct):
+        (JSC::Interpreter::prepareForRepeatCall):
+        * jit/JIT.cpp:
+        (JSC::JIT::privateCompile):
+        (JSC::JIT::linkFor):
+        * jit/JIT.h:
+        (JIT):
+        * jit/JITStubs.cpp:
+        (JSC::DEFINE_STUB_FUNCTION):
+        (JSC::lazyLinkFor):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::setUpCall):
+
+2013-05-23  Filip Pizlo  <fpi...@apple.com>
+
         fourthTier: rationalize DFG::CapabilityLevel and DFGCapabilities.[h|cpp]
         https://bugs.webkit.org/show_bug.cgi?id=116696
 

Modified: branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -113,6 +113,8 @@
     out.print(inferredName(), "#", hash(), ":[", RawPointer(this), "->", RawPointer(ownerExecutable()), ", ", jitType, codeType());
     if (codeType() == FunctionCode)
         out.print(specializationKind());
+    if (m_shouldAlwaysBeInlined)
+        out.print(" (SABI)");
     out.print("]");
 }
 
@@ -1576,6 +1578,7 @@
     , m_numCalleeRegisters(other.m_numCalleeRegisters)
     , m_numVars(other.m_numVars)
     , m_isConstructor(other.m_isConstructor)
+    , m_shouldAlwaysBeInlined(true)
     , m_unlinkedCode(*other.m_vm, other.m_ownerExecutable.get(), other.m_unlinkedCode.get())
     , m_ownerExecutable(*other.m_vm, other.m_ownerExecutable.get(), other.m_ownerExecutable.get())
     , m_vm(other.m_vm)
@@ -1622,6 +1625,7 @@
     , m_numCalleeRegisters(unlinkedCodeBlock->m_numCalleeRegisters)
     , m_numVars(unlinkedCodeBlock->m_numVars)
     , m_isConstructor(unlinkedCodeBlock->isConstructor())
+    , m_shouldAlwaysBeInlined(true)
     , m_unlinkedCode(globalObject->vm(), ownerExecutable, unlinkedCodeBlock)
     , m_ownerExecutable(globalObject->vm(), ownerExecutable, ownerExecutable)
     , m_vm(unlinkedCodeBlock->vm())
@@ -2569,6 +2573,12 @@
     }
 }
 
+void CodeBlock::linkIncomingCall(ExecState* callerFrame, CallLinkInfo* incoming)
+{
+    noticeIncomingCall(callerFrame);
+    m_incomingCalls.push(incoming);
+}
+
 void CodeBlock::unlinkIncomingCalls()
 {
 #if ENABLE(LLINT)
@@ -2584,6 +2594,12 @@
 #endif // ENABLE(JIT)
 
 #if ENABLE(LLINT)
+void CodeBlock::linkIncomingCall(ExecState* callerFrame, LLIntCallLinkInfo* incoming)
+{
+    noticeIncomingCall(callerFrame);
+    m_incomingLLIntCalls.push(incoming);
+}
+
 Instruction* CodeBlock::adjustPCIfAtCallSite(Instruction* potentialReturnPC)
 {
     ASSERT(potentialReturnPC);
@@ -2924,6 +2940,70 @@
     return jsCast<FunctionExecutable*>(codeOrigin.inlineCallFrame->executable.get())->generatedBytecode().globalObject();
 }
 
+void CodeBlock::noticeIncomingCall(ExecState* callerFrame)
+{
+    CodeBlock* callerCodeBlock = callerFrame->codeBlock();
+    
+    if (Options::verboseOSR())
+        dataLog("Noticing call link from ", *callerCodeBlock, " to ", *this, "\n");
+    
+    if (!m_shouldAlwaysBeInlined)
+        return;
+    
+    if (!hasBaselineJITProfiling())
+        return;
+    
+    if (!DFG::mightInlineFunction(this))
+        return;
+    
+    if (!canInline(m_capabilityLevelState))
+        return;
+    
+    if (callerCodeBlock->jitType() == JITCode::InterpreterThunk) {
+        // If the caller is still in the interpreter, then we can't expect inlining to
+        // happen anytime soon. Assume it's profitable to optimize it separately. This
+        // ensures that a function is SABI only if it is called no more frequently than
+        // any of its callers.
+        m_shouldAlwaysBeInlined = false;
+        if (Options::verboseOSR())
+            dataLog("    Marking SABI because caller is in LLInt.\n");
+        return;
+    }
+    
+    if (callerCodeBlock->codeType() != FunctionCode) {
+        // If the caller is either eval or global code, assume that it won't be
+        // optimized anytime soon. For eval code this is particularly true since we
+        // delay eval optimization by a *lot*.
+        m_shouldAlwaysBeInlined = false;
+        if (Options::verboseOSR())
+            dataLog("    Marking SABI because caller is not a function.\n");
+        return;
+    }
+    
+    ExecState* frame = callerFrame;
+    for (unsigned i = Options::maximumInliningDepth(); i--; frame = frame->callerFrame()) {
+        if (frame->hasHostCallFrameFlag())
+            break;
+        if (frame->codeBlock() == this) {
+            // Recursive calls won't be inlined.
+            if (Options::verboseOSR())
+                dataLog("    Marking SABI because recursion was detected.\n");
+            m_shouldAlwaysBeInlined = false;
+            return;
+        }
+    }
+    
+    RELEASE_ASSERT(callerCodeBlock->m_capabilityLevelState != DFG::CapabilityLevelNotSet);
+    
+    if (canCompile(callerCodeBlock->m_capabilityLevelState))
+        return;
+    
+    if (Options::verboseOSR())
+        dataLog("    Marking SABI because the caller is not a DFG candidate.\n");
+    
+    m_shouldAlwaysBeInlined = false;
+}
+
 unsigned CodeBlock::reoptimizationRetryCounter() const
 {
     ASSERT(m_reoptimizationRetryCounter <= Options::reoptimizationRetryCounterMax());

Modified: branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.h (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.h	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/bytecode/CodeBlock.h	2013-05-24 21:06:07 UTC (rev 150657)
@@ -214,13 +214,8 @@
 
     void unlinkCalls();
         
-    bool hasIncomingCalls() { return m_incomingCalls.begin() != m_incomingCalls.end(); }
+    void linkIncomingCall(ExecState* callerFrame, CallLinkInfo*);
         
-    void linkIncomingCall(CallLinkInfo* incoming)
-    {
-        m_incomingCalls.push(incoming);
-    }
-        
     bool isIncomingCallAlreadyLinked(CallLinkInfo* incoming)
     {
         return m_incomingCalls.isOnList(incoming);
@@ -228,10 +223,7 @@
 #endif // ENABLE(JIT)
 
 #if ENABLE(LLINT)
-    void linkIncomingCall(LLIntCallLinkInfo* incoming)
-    {
-        m_incomingLLIntCalls.push(incoming);
-    }
+    void linkIncomingCall(ExecState* callerFrame, LLIntCallLinkInfo*);
 #endif // ENABLE(LLINT)
         
     void unlinkIncomingCalls();
@@ -907,6 +899,8 @@
     // concurrent compilation threads finish what they're doing.
     ConcurrentJITLock m_lock;
     
+    bool m_shouldAlwaysBeInlined;
+    
 protected:
 #if ENABLE(JIT)
     virtual CompilationResult jitCompileImpl(ExecState*) = 0;
@@ -916,7 +910,9 @@
 
 private:
     friend class DFGCodeBlocks;
-        
+    
+    void noticeIncomingCall(ExecState* callerFrame);
+    
     double optimizationThresholdScalingFactor();
 
 #if ENABLE(JIT)

Modified: branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGCapabilities.h (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGCapabilities.h	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGCapabilities.h	2013-05-24 21:06:07 UTC (rev 150657)
@@ -118,6 +118,11 @@
     return mightInlineFunctionForConstruct(codeBlock);
 }
 
+inline bool mightInlineFunction(CodeBlock* codeBlock)
+{
+    return mightInlineFunctionFor(codeBlock, codeBlock->specializationKind());
+}
+
 inline bool canInlineFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind kind, bool isClosureCall)
 {
     if (isClosureCall) {

Modified: branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGPlan.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGPlan.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGPlan.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -116,7 +116,7 @@
             pathName = "FTL";
             break;
         }
-        dataLog("Compiled ", *codeBlock, " with ", pathName, " in ", currentTimeMS() - before, " ms.\n");
+        dataLog("Optimized ", *codeBlock->alternative(), " with ", pathName, " in ", currentTimeMS() - before, " ms.\n");
     }
 }
 

Modified: branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGRepatch.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGRepatch.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/dfg/DFGRepatch.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -1129,6 +1129,10 @@
 {
     ASSERT(!callLinkInfo.stub);
     
+    // If you're being call-linked from a DFG caller, then you obviously didn't get inlined.
+    if (calleeCodeBlock)
+        calleeCodeBlock->m_shouldAlwaysBeInlined = false;
+    
     CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock();
     VM* vm = callerCodeBlock->vm();
     
@@ -1140,7 +1144,7 @@
     repatchBuffer.relink(callLinkInfo.hotPathOther, codePtr);
     
     if (calleeCodeBlock)
-        calleeCodeBlock->linkIncomingCall(&callLinkInfo);
+        calleeCodeBlock->linkIncomingCall(exec->callerFrame(), &callLinkInfo);
     
     if (kind == CodeForCall) {
         repatchBuffer.relink(callLinkInfo.callReturnLocation, vm->getCTIStub(linkClosureCallThunkGenerator).code());

Modified: branches/dfgFourthTier/Source/JavaScriptCore/interpreter/Interpreter.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/interpreter/Interpreter.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/interpreter/Interpreter.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -1029,6 +1029,7 @@
         }
         newCodeBlock = &callData.js.functionExecutable->generatedBytecodeForCall();
         ASSERT(!!newCodeBlock);
+        newCodeBlock->m_shouldAlwaysBeInlined = false;
     } else
         newCodeBlock = 0;
 
@@ -1104,6 +1105,7 @@
         }
         newCodeBlock = &constructData.js.functionExecutable->generatedBytecodeForConstruct();
         ASSERT(!!newCodeBlock);
+        newCodeBlock->m_shouldAlwaysBeInlined = false;
     } else
         newCodeBlock = 0;
 
@@ -1169,6 +1171,7 @@
         return CallFrameClosure();
     }
     CodeBlock* newCodeBlock = &functionExecutable->generatedBytecodeForCall();
+    newCodeBlock->m_shouldAlwaysBeInlined = false;
 
     size_t argsCount = argumentCountIncludingThis;
 

Modified: branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -35,8 +35,7 @@
 #endif
 
 #include "CodeBlock.h"
-#include <wtf/CryptographicallyRandomNumber.h>
-#include "DFGNode.h" // for DFG_SUCCESS_STATS
+#include "DFGCapabilities.h"
 #include "Interpreter.h"
 #include "JITInlines.h"
 #include "JITStubCall.h"
@@ -47,6 +46,7 @@
 #include "RepatchBuffer.h"
 #include "ResultType.h"
 #include "SamplingTool.h"
+#include <wtf/CryptographicallyRandomNumber.h>
 
 using namespace std;
 
@@ -624,6 +624,18 @@
         RELEASE_ASSERT_NOT_REACHED();
         break;
     }
+    
+    switch (m_codeBlock->codeType()) {
+    case GlobalCode:
+    case EvalCode:
+        m_codeBlock->m_shouldAlwaysBeInlined = false;
+        break;
+    case FunctionCode:
+        // We could have already set it to false because we detected an uninlineable call.
+        // Don't override that observation.
+        m_codeBlock->m_shouldAlwaysBeInlined &= canInline(level) && DFG::mightInlineFunction(m_codeBlock);
+        break;
+    }
 #endif
     
     if (Options::showDisassembly() || m_vm->m_perBytecodeProfiler)
@@ -707,6 +719,7 @@
         jump(functionBody);
 
         arityCheck = label();
+        store8(TrustedImm32(0), &m_codeBlock->m_shouldAlwaysBeInlined);
         preserveReturnAddressAfterCall(regT2);
         emitPutToCallFrameHeader(regT2, JSStack::ReturnPC);
         emitPutImmediateToCallFrameHeader(m_codeBlock, JSStack::CodeBlock);
@@ -842,7 +855,7 @@
     return adoptRef(new DirectJITCode(result, JITCode::BaselineJIT));
 }
 
-void JIT::linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, JIT::CodePtr code, CallLinkInfo* callLinkInfo, VM* vm, CodeSpecializationKind kind)
+void JIT::linkFor(ExecState* exec, JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, JIT::CodePtr code, CallLinkInfo* callLinkInfo, VM* vm, CodeSpecializationKind kind)
 {
     RepatchBuffer repatchBuffer(callerCodeBlock);
 
@@ -852,7 +865,7 @@
     repatchBuffer.relink(callLinkInfo->hotPathOther, code);
 
     if (calleeCodeBlock)
-        calleeCodeBlock->linkIncomingCall(callLinkInfo);
+        calleeCodeBlock->linkIncomingCall(exec, callLinkInfo);
 
     // Patch the slow patch so we do not continue to try to link.
     if (kind == CodeForCall) {

Modified: branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.h (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.h	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/jit/JIT.h	2013-05-24 21:06:07 UTC (rev 150657)
@@ -393,7 +393,7 @@
             return jit.privateCompilePatchGetArrayLength(returnAddress);
         }
 
-        static void linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, CodePtr, CallLinkInfo*, VM*, CodeSpecializationKind);
+        static void linkFor(ExecState*, JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, CodePtr, CallLinkInfo*, VM*, CodeSpecializationKind);
         static void linkSlowCall(CodeBlock* callerCodeBlock, CallLinkInfo*);
 
     private:

Modified: branches/dfgFourthTier/Source/JavaScriptCore/jit/JITStubs.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/jit/JITStubs.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/jit/JITStubs.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -907,6 +907,12 @@
     CodeBlock* codeBlock = callFrame->codeBlock();
     unsigned bytecodeIndex = stackFrame.args[0].int32();
 
+    if (bytecodeIndex) {
+        // If we're attempting to OSR from a loop, assume that this should be
+        // separately optimized.
+        codeBlock->m_shouldAlwaysBeInlined = false;
+    }
+    
     if (Options::verboseOSR()) {
         dataLog(
             *codeBlock, ": Entered optimize with bytecodeIndex = ", bytecodeIndex,
@@ -923,10 +929,18 @@
     if (!codeBlock->checkIfOptimizationThresholdReached()) {
         codeBlock->updateAllPredictions();
         if (Options::verboseOSR())
-            dataLog("Choosing not to optimize ", *codeBlock, " yet.\n");
+            dataLog("Choosing not to optimize ", *codeBlock, " yet, because the threshold hasn't been reached.\n");
         return;
     }
     
+    if (codeBlock->m_shouldAlwaysBeInlined) {
+        codeBlock->updateAllPredictions();
+        codeBlock->optimizeAfterWarmUp();
+        if (Options::verboseOSR())
+            dataLog("Choosing not to optimize ", *codeBlock, " yet, because m_shouldAlwaysBeInlined == true.\n");
+        return;
+    }
+    
     // We cannot be in the process of asynchronous compilation and also have an optimized
     // replacement.
     ASSERT(
@@ -979,8 +993,12 @@
         //   code block. Obviously that's unfortunate and we'd rather not have that
         //   happen, but it can happen, and if it did then the jettisoning logic will
         //   have set our threshold appropriately and we have nothing left to do.
-        if (!codeBlock->hasOptimizedReplacement())
+        if (!codeBlock->hasOptimizedReplacement()) {
+            codeBlock->updateAllPredictions();
+            if (Options::verboseOSR())
+                dataLog("Code block ", *codeBlock, " was compiled but it doesn't have an optimized replacement.\n");
             return;
+        }
     } else if (codeBlock->hasOptimizedReplacement()) {
         if (Options::verboseOSR())
             dataLog("Considering OSR ", *codeBlock, " -> ", *codeBlock->replacement(), ".\n");
@@ -1011,7 +1029,7 @@
             if (Options::verboseOSR()) {
                 dataLog(
                     "Delaying optimization for ", *codeBlock,
-                    " (in loop) because of insufficient profiling.\n");
+                    " because of insufficient profiling.\n");
             }
             return;
         }
@@ -1279,12 +1297,12 @@
         else
             codePtr = functionExecutable->generatedJITCodeFor(kind)->addressForCall();
     }
-
+    
     ConcurrentJITLocker locker(callFrame->callerFrame()->codeBlock()->m_lock);
     if (!callLinkInfo->seenOnce())
         callLinkInfo->setSeen();
     else
-        JIT::linkFor(callee, callFrame->callerFrame()->codeBlock(), codeBlock, codePtr, callLinkInfo, &callFrame->vm(), kind);
+        JIT::linkFor(callFrame->callerFrame(), callee, callFrame->callerFrame()->codeBlock(), codeBlock, codePtr, callLinkInfo, &callFrame->vm(), kind);
 
     return codePtr.executableAddress();
 }

Modified: branches/dfgFourthTier/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp (150656 => 150657)


--- branches/dfgFourthTier/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp	2013-05-24 20:26:00 UTC (rev 150656)
+++ branches/dfgFourthTier/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp	2013-05-24 21:06:07 UTC (rev 150657)
@@ -1436,16 +1436,18 @@
     
     if (!LLINT_ALWAYS_ACCESS_SLOW && callLinkInfo) {
         ExecState* execCaller = execCallee->callerFrame();
+        
+        CodeBlock* callerCodeBlock = execCaller->codeBlock();
 
-        ConcurrentJITLocker locker(execCaller->codeBlock()->m_lock);
+        ConcurrentJITLocker locker(callerCodeBlock->m_lock);
         
         if (callLinkInfo->isOnList())
             callLinkInfo->remove();
-        callLinkInfo->callee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
-        callLinkInfo->lastSeenCallee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
+        callLinkInfo->callee.set(vm, callerCodeBlock->ownerExecutable(), callee);
+        callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock->ownerExecutable(), callee);
         callLinkInfo->machineCodeTarget = codePtr;
         if (codeBlock)
-            codeBlock->linkIncomingCall(callLinkInfo);
+            codeBlock->linkIncomingCall(execCaller, callLinkInfo);
     }
 
     LLINT_CALL_RETURN(execCallee, pc, codePtr.executableAddress());