Title: [199069] trunk
Revision: 199069
Author: fpi...@apple.com
Date: 2016-04-05 12:58:04 -0700 (Tue, 05 Apr 2016)

Log Message

PolymorphicAccess should have a MegamorphicLoad case
https://bugs.webkit.org/show_bug.cgi?id=156182

Reviewed by Geoffrey Garen and Keith Miller.

Source/_javascript_Core:

This introduces a new case to PolymorphicAccess called MegamorphicLoad. This inlines the lookup in
the PropertyTable. It's cheaper than switching on a huge number of cases and it's cheaper than
calling into C++ to do the same job - particularly since inlining the lookup into an access means
that we can precompute the hash code.

When writing the inline code for the hashtable lookup, I found that our hashing algorithm was not
optimal. It used a double-hashing method for reducing collision pathologies. This is great for
improving the performance of some worst-case scenarios. But this misses the point of a hashtable: we
want to optimize the average-case performance. When optimizing for average-case, we can choose to
either focus on maximizing the likelihood of the fast case happening, or to minimize the cost of the
worst-case, or to minimize the cost of the fast case. Even a very basic hashtable will achieve a high
probability of hitting the fast case. So, doing work to reduce the likelihood of a worst-case
pathology only makes sense if it also preserves the good performance of the fast case, or reduces the
likelihood of the worst-case by so much that it's a win for the average case even with a slow-down in
the fast case.
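The register-pressure argument can be made concrete. The sketch below is illustrative only: the mixing function stands in for WTF's real doubleHash() (whose details are not shown in this patch); the point is that double hashing carries an extra loop-live value (the per-key step) that linear probing does not need.

```cpp
#include <cstdint>

// Contrast of the two probe schemes. doubleHashStep() is a placeholder for
// WTF's real secondary hash, not the actual implementation.
static uint32_t doubleHashStep(uint32_t hash)
{
    // Any odd step keeps the probe sequence a permutation of a
    // power-of-two-sized table.
    return (hash * 0x9E3779B9u) | 1;
}

// Double hashing: the loop must keep both `i` and `step` live in registers.
uint32_t nextProbeDouble(uint32_t i, uint32_t step, uint32_t mask)
{
    return (i + step) & mask;
}

// Linear probing: only `i` stays live around the loop.
uint32_t nextProbeLinear(uint32_t i, uint32_t mask)
{
    return (i + 1) & mask;
}
```

In generated IC code, that one extra live value is the difference between fitting in the available scratch registers and spilling on every probe.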

I don't believe, based on looking at how the double-hashing is implemented, that it's possible that
this preserves the good performance of the fast case. It requires at least one more value to be live
around the loop, and dramatically increases the register pressure at key points inside the loop. The
biggest offender is the doubleHash() method itself. There is no getting around how bad this is: if
the compiler live-range-splits that method to death to avoid degrading register pressure elsewhere
then we will pay a steep price anytime we take the second iteration around the loop; but if the
compiler doesn't split around the call then the hashtable lookup fast path will be full of spills on
some architectures (I performed biological register allocation and found that I needed 9 registers
for complete lookup, while x86-64 has only 6 callee-saves; OTOH ARM64 has 10 callee-saves so it might
be better off).

Hence, this patch changes the hashtable lookup to use simple linear probing. This was not a slow-down
on anything, and it made MegamorphicLoad much more sensible since it is less likely to have to spill.
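The lookup the MegamorphicLoad stub emits can be sketched in C as a plain function. This is a simplified model, not the emitted code: the identifier's hash is known when the stub is generated (the patch bakes `IdentifierRepHash::hash(key)` in as an immediate), uniqued strings compare by pointer, and collisions advance by one slot under the mask.

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for UniquedStringImpl* keys and PropertyMapEntry.
struct Entry {
    const char* key;
    unsigned offset;
};

struct Table {
    std::vector<uint32_t> index; // 0 == empty; otherwise entryIndex + 1
    std::vector<Entry> entries;
    uint32_t mask;               // index.size() - 1, a power of two
};

// Returns the property offset, or -1 on a miss.
int lookup(const Table& table, const char* key, uint32_t precomputedHash)
{
    uint32_t i = precomputedHash & table.mask;
    for (;;) {
        uint32_t entryIndex = table.index[i];
        if (!entryIndex)
            return -1; // empty slot: the property is absent
        const Entry& entry = table.entries[entryIndex - 1];
        if (entry.key == key) // uniqued strings compare by pointer
            return entry.offset;
        i = (i + 1) & table.mask; // linear probe: one live value, no doubleHash()
    }
}
```

The real stub additionally bails to the fall-through path for accessor or custom-accessor attributes, which this sketch omits.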

There are some other small changes in this patch, like rationalizing the IC's choice between giving
up after a repatch (i.e. never trying again) and just pretending that nothing happened (so we can
try to repatch again in the future). It looked like the code in Repatch.cpp was set up to be able to
choose between those options, but we weren't fully taking advantage of it because the
regenerateWithCase() method just returned null for any failure, and didn't say whether it was the
sort of failure that renders the inline cache unrepatchable (like memory allocation failure). Now
this is all made explicit. I wanted to make sure this change happened in this patch since the
MegamorphicLoad code automagically generates a MegamorphicLoad case by coalescing other cases. Since
this is intended to avoid blowing out the cache and making it unrepatchable, I wanted to make sure
that the rules for giving up were something that made sense to me.
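The distinction being made explicit is a tri-state result rather than a nullable code pointer. The sketch below mirrors the patch's AccessGenerationResult kinds; the caller-side policy names are hypothetical, added only to show why the two failure modes must not be conflated.

```cpp
// The three outcomes the patch makes explicit (mirrors AccessGenerationResult::Kind).
enum class Kind { MadeNoChanges, GaveUp, GeneratedNewCode };

// Hypothetical caller-side actions, illustrating the Repatch.cpp choice between
// "pretend nothing happened" and "never try to repatch again".
enum class StubAction { KeepOldStubAndRetryLater, MarkUnrepatchable, InstallNewStub };

StubAction actionFor(Kind kind)
{
    switch (kind) {
    case Kind::MadeNoChanges:
        return StubAction::KeepOldStubAndRetryLater; // e.g. duplicate case added
    case Kind::GaveUp:
        return StubAction::MarkUnrepatchable; // e.g. allocation failure, too many cases
    case Kind::GeneratedNewCode:
        return StubAction::InstallNewStub;
    }
    return StubAction::MarkUnrepatchable;
}
```

With a bare null MacroAssemblerCodePtr, the first two outcomes were indistinguishable; that is exactly what regenerateWithCases() now reports.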
        
This is a big win on microbenchmarks. It's neutral on traditional JS benchmarks. It's a slight
speed-up for page loading, because many real websites like to have megamorphic property accesses.

* bytecode/PolymorphicAccess.cpp:
(JSC::AccessGenerationResult::dump):
(JSC::AccessGenerationState::addWatchpoint):
(JSC::AccessCase::get):
(JSC::AccessCase::megamorphicLoad):
(JSC::AccessCase::replace):
(JSC::AccessCase::guardedByStructureCheck):
(JSC::AccessCase::couldStillSucceed):
(JSC::AccessCase::canBeReplacedByMegamorphicLoad):
(JSC::AccessCase::canReplace):
(JSC::AccessCase::generateWithGuard):
(JSC::AccessCase::generate):
(JSC::PolymorphicAccess::PolymorphicAccess):
(JSC::PolymorphicAccess::~PolymorphicAccess):
(JSC::PolymorphicAccess::regenerateWithCases):
(JSC::PolymorphicAccess::regenerateWithCase):
(WTF::printInternal):
* bytecode/PolymorphicAccess.h:
(JSC::AccessCase::isGet):
(JSC::AccessCase::isPut):
(JSC::AccessCase::isIn):
(JSC::AccessGenerationResult::AccessGenerationResult):
(JSC::AccessGenerationResult::operator==):
(JSC::AccessGenerationResult::operator!=):
(JSC::AccessGenerationResult::operator bool):
(JSC::AccessGenerationResult::kind):
(JSC::AccessGenerationResult::code):
(JSC::AccessGenerationResult::madeNoChanges):
(JSC::AccessGenerationResult::gaveUp):
(JSC::AccessGenerationResult::generatedNewCode):
(JSC::PolymorphicAccess::isEmpty):
(JSC::AccessGenerationState::AccessGenerationState):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::addAccessCase):
* bytecode/StructureStubInfo.h:
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
(JSC::AssemblyHelpers::loadProperty):
(JSC::emitRandomThunkImpl):
(JSC::AssemblyHelpers::emitRandomThunk):
(JSC::AssemblyHelpers::emitLoadStructure):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::loadValue):
(JSC::AssemblyHelpers::moveValueRegs):
(JSC::AssemblyHelpers::argumentsStart):
(JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
(JSC::AssemblyHelpers::emitLoadStructure): Deleted.
* jit/GPRInfo.cpp:
(JSC::JSValueRegs::dump):
* jit/GPRInfo.h:
(JSC::JSValueRegs::uses):
* jit/Repatch.cpp:
(JSC::replaceWithJump):
(JSC::tryCacheGetByID):
(JSC::tryCachePutByID):
(JSC::tryRepatchIn):
* jit/ThunkGenerators.cpp:
(JSC::virtualThunkFor):
* runtime/Options.h:
* runtime/PropertyMapHashTable.h:
(JSC::PropertyTable::begin):
(JSC::PropertyTable::find):
(JSC::PropertyTable::get):
* runtime/Structure.h:

LayoutTests:

* js/regress/megamorphic-load-expected.txt: Added.
* js/regress/megamorphic-load.html: Added.
* js/regress/script-tests/megamorphic-load.js: Added.
* js/regress/string-repeat-not-resolving-no-inline-expected.txt: Added.
* js/regress/string-repeat-not-resolving-no-inline.html: Added.

Diff

Modified: trunk/LayoutTests/ChangeLog (199068 => 199069)


--- trunk/LayoutTests/ChangeLog	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/LayoutTests/ChangeLog	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,3 +1,16 @@
+2016-04-04  Filip Pizlo  <fpi...@apple.com>
+
+        PolymorphicAccess should have a MegamorphicLoad case
+        https://bugs.webkit.org/show_bug.cgi?id=156182
+
+        Reviewed by Geoffrey Garen and Keith Miller.
+
+        * js/regress/megamorphic-load-expected.txt: Added.
+        * js/regress/megamorphic-load.html: Added.
+        * js/regress/script-tests/megamorphic-load.js: Added.
+        * js/regress/string-repeat-not-resolving-no-inline-expected.txt: Added.
+        * js/regress/string-repeat-not-resolving-no-inline.html: Added.
+
 2016-04-05  Antti Koivisto  <an...@apple.com>
 
         Un-marking plugins/focus.html as flaky on mac

Added: trunk/LayoutTests/js/regress/megamorphic-load-expected.txt (0 => 199069)


--- trunk/LayoutTests/js/regress/megamorphic-load-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/js/regress/megamorphic-load-expected.txt	2016-04-05 19:58:04 UTC (rev 199069)
@@ -0,0 +1,10 @@
+JSRegress/megamorphic-load
+
+On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
+
+
+PASS no exception thrown
+PASS successfullyParsed is true
+
+TEST COMPLETE
+

Added: trunk/LayoutTests/js/regress/megamorphic-load.html (0 => 199069)


--- trunk/LayoutTests/js/regress/megamorphic-load.html	                        (rev 0)
+++ trunk/LayoutTests/js/regress/megamorphic-load.html	2016-04-05 19:58:04 UTC (rev 199069)
@@ -0,0 +1,12 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+<head>
+<script src=""
+</head>
+<body>
+<script src=""
+<script src=""
+<script src=""
+<script src=""
+</body>
+</html>

Added: trunk/LayoutTests/js/regress/script-tests/megamorphic-load.js (0 => 199069)


--- trunk/LayoutTests/js/regress/script-tests/megamorphic-load.js	                        (rev 0)
+++ trunk/LayoutTests/js/regress/script-tests/megamorphic-load.js	2016-04-05 19:58:04 UTC (rev 199069)
@@ -0,0 +1,15 @@
+(function() {
+    var array = [];
+    for (var i = 0; i < 1000; ++i) {
+        var o = {};
+        o["i" + i] = i;
+        o.f = 42;
+        array.push(o);
+    }
+    
+    for (var i = 0; i < 1000000; ++i) {
+        var result = array[i % array.length].f;
+        if (result != 42)
+            throw "Error: bad result: " + result;
+    }
+})();

Added: trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline-expected.txt (0 => 199069)


--- trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline-expected.txt	                        (rev 0)
+++ trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline-expected.txt	2016-04-05 19:58:04 UTC (rev 199069)
@@ -0,0 +1,10 @@
+JSRegress/string-repeat-not-resolving-no-inline
+
+On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
+
+
+PASS no exception thrown
+PASS successfullyParsed is true
+
+TEST COMPLETE
+

Added: trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline.html (0 => 199069)


--- trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline.html	                        (rev 0)
+++ trunk/LayoutTests/js/regress/string-repeat-not-resolving-no-inline.html	2016-04-05 19:58:04 UTC (rev 199069)
@@ -0,0 +1,12 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+<head>
+<script src=""
+</head>
+<body>
+<script src=""
+<script src=""
+<script src=""
+<script src=""
+</body>
+</html>

Modified: trunk/Source/_javascript_Core/ChangeLog (199068 => 199069)


--- trunk/Source/_javascript_Core/ChangeLog	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/_javascript_Core/ChangeLog	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,3 +1,120 @@
+2016-04-04  Filip Pizlo  <fpi...@apple.com>
+
+        PolymorphicAccess should have a MegamorphicLoad case
+        https://bugs.webkit.org/show_bug.cgi?id=156182
+
+        Reviewed by Geoffrey Garen and Keith Miller.
+
+        This introduces a new case to PolymorphicAccess called MegamorphicLoad. This inlines the lookup in
+        the PropertyTable. It's cheaper than switching on a huge number of cases and it's cheaper than
+        calling into C++ to do the same job - particularly since inlining the lookup into an access means
+        that we can precompute the hash code.
+
+        When writing the inline code for the hashtable lookup, I found that our hashing algorithm was not
+        optimal. It used a double-hashing method for reducing collision pathologies. This is great for
+        improving the performance of some worst-case scenarios. But this misses the point of a hashtable: we
+        want to optimize the average-case performance. When optimizing for average-case, we can choose to
+        either focus on maximizing the likelihood of the fast case happening, or to minimize the cost of the
+        worst-case, or to minimize the cost of the fast case. Even a very basic hashtable will achieve a high
+        probability of hitting the fast case. So, doing work to reduce the likelihood of a worst-case
+        pathology only makes sense if it also preserves the good performance of the fast case, or reduces the
+        likelihood of the worst-case by so much that it's a win for the average case even with a slow-down in
+        the fast case.
+
+        I don't believe, based on looking at how the double-hashing is implemented, that it's possible that
+        this preserves the good performance of the fast case. It requires at least one more value to be live
+        around the loop, and dramatically increases the register pressure at key points inside the loop. The
+        biggest offender is the doubleHash() method itself. There is no getting around how bad this is: if
+        the compiler live-range-splits that method to death to avoid degrading register pressure elsewhere
+        then we will pay a steep price anytime we take the second iteration around the loop; but if the
+        compiler doesn't split around the call then the hashtable lookup fast path will be full of spills on
+        some architectures (I performed biological register allocation and found that I needed 9 registers
+        for complete lookup, while x86-64 has only 6 callee-saves; OTOH ARM64 has 10 callee-saves so it might
+        be better off).
+
+        Hence, this patch changes the hashtable lookup to use simple linear probing. This was not a slow-down
+        on anything, and it made MegamorphicLoad much more sensible since it is less likely to have to spill.
+
+        There are some other small changes in this patch, like rationalizing the IC's choice between giving
+        up after a repatch (i.e. never trying again) and just pretending that nothing happened (so we can
+        try to repatch again in the future). It looked like the code in Repatch.cpp was set up to be able to
+        choose between those options, but we weren't fully taking advantage of it because the
+        regenerateWithCase() method just returned null for any failure, and didn't say whether it was the
+        sort of failure that renders the inline cache unrepatchable (like memory allocation failure). Now
+        this is all made explicit. I wanted to make sure this change happened in this patch since the
+        MegamorphicLoad code automagically generates a MegamorphicLoad case by coalescing other cases. Since
+        this is intended to avoid blowing out the cache and making it unrepatchable, I wanted to make sure
+        that the rules for giving up were something that made sense to me.
+        
+        This is a big win on microbenchmarks. It's neutral on traditional JS benchmarks. It's a slight
+        speed-up for page loading, because many real websites like to have megamorphic property accesses.
+
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::AccessGenerationResult::dump):
+        (JSC::AccessGenerationState::addWatchpoint):
+        (JSC::AccessCase::get):
+        (JSC::AccessCase::megamorphicLoad):
+        (JSC::AccessCase::replace):
+        (JSC::AccessCase::guardedByStructureCheck):
+        (JSC::AccessCase::couldStillSucceed):
+        (JSC::AccessCase::canBeReplacedByMegamorphicLoad):
+        (JSC::AccessCase::canReplace):
+        (JSC::AccessCase::generateWithGuard):
+        (JSC::AccessCase::generate):
+        (JSC::PolymorphicAccess::PolymorphicAccess):
+        (JSC::PolymorphicAccess::~PolymorphicAccess):
+        (JSC::PolymorphicAccess::regenerateWithCases):
+        (JSC::PolymorphicAccess::regenerateWithCase):
+        (WTF::printInternal):
+        * bytecode/PolymorphicAccess.h:
+        (JSC::AccessCase::isGet):
+        (JSC::AccessCase::isPut):
+        (JSC::AccessCase::isIn):
+        (JSC::AccessGenerationResult::AccessGenerationResult):
+        (JSC::AccessGenerationResult::operator==):
+        (JSC::AccessGenerationResult::operator!=):
+        (JSC::AccessGenerationResult::operator bool):
+        (JSC::AccessGenerationResult::kind):
+        (JSC::AccessGenerationResult::code):
+        (JSC::AccessGenerationResult::madeNoChanges):
+        (JSC::AccessGenerationResult::gaveUp):
+        (JSC::AccessGenerationResult::generatedNewCode):
+        (JSC::PolymorphicAccess::isEmpty):
+        (JSC::AccessGenerationState::AccessGenerationState):
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::aboutToDie):
+        (JSC::StructureStubInfo::addAccessCase):
+        * bytecode/StructureStubInfo.h:
+        * jit/AssemblyHelpers.cpp:
+        (JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
+        (JSC::AssemblyHelpers::loadProperty):
+        (JSC::emitRandomThunkImpl):
+        (JSC::AssemblyHelpers::emitRandomThunk):
+        (JSC::AssemblyHelpers::emitLoadStructure):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::loadValue):
+        (JSC::AssemblyHelpers::moveValueRegs):
+        (JSC::AssemblyHelpers::argumentsStart):
+        (JSC::AssemblyHelpers::emitStoreStructureWithTypeInfo):
+        (JSC::AssemblyHelpers::emitLoadStructure): Deleted.
+        * jit/GPRInfo.cpp:
+        (JSC::JSValueRegs::dump):
+        * jit/GPRInfo.h:
+        (JSC::JSValueRegs::uses):
+        * jit/Repatch.cpp:
+        (JSC::replaceWithJump):
+        (JSC::tryCacheGetByID):
+        (JSC::tryCachePutByID):
+        (JSC::tryRepatchIn):
+        * jit/ThunkGenerators.cpp:
+        (JSC::virtualThunkFor):
+        * runtime/Options.h:
+        * runtime/PropertyMapHashTable.h:
+        (JSC::PropertyTable::begin):
+        (JSC::PropertyTable::find):
+        (JSC::PropertyTable::get):
+        * runtime/Structure.h:
+
 2016-04-05  Antoine Quint  <grao...@apple.com>
 
         [WebGL2] Turn the ENABLE_WEBGL2 flag on

Modified: trunk/Source/_javascript_Core/bytecode/PolymorphicAccess.cpp (199068 => 199069)


--- trunk/Source/_javascript_Core/bytecode/PolymorphicAccess.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/_javascript_Core/bytecode/PolymorphicAccess.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -46,6 +46,13 @@
 
 static const bool verbose = false;
 
+void AccessGenerationResult::dump(PrintStream& out) const
+{
+    out.print(m_kind);
+    if (m_code)
+        out.print(":", m_code);
+}
+
 Watchpoint* AccessGenerationState::addWatchpoint(const ObjectPropertyCondition& condition)
 {
     return WatchpointsOnStructureStubInfo::ensureReferenceAndAddWatchpoint(
@@ -175,6 +182,21 @@
     return result;
 }
 
+std::unique_ptr<AccessCase> AccessCase::megamorphicLoad(VM& vm, JSCell* owner)
+{
+    UNUSED_PARAM(vm);
+    UNUSED_PARAM(owner);
+    
+    if (GPRInfo::numberOfRegisters < 9)
+        return nullptr;
+    
+    std::unique_ptr<AccessCase> result(new AccessCase());
+    
+    result->m_type = MegamorphicLoad;
+    
+    return result;
+}
+
 std::unique_ptr<AccessCase> AccessCase::replace(
     VM& vm, JSCell* owner, Structure* structure, PropertyOffset offset)
 {
@@ -322,6 +344,7 @@
         return false;
 
     switch (m_type) {
+    case MegamorphicLoad:
     case ArrayLength:
     case StringLength:
         return false;
@@ -342,9 +365,21 @@
     return m_conditionSet.structuresEnsureValidityAssumingImpurePropertyWatchpoint();
 }
 
-bool AccessCase::canReplace(const AccessCase& other)
+bool AccessCase::canBeReplacedByMegamorphicLoad() const
 {
+    return type() == Load
+        && !viaProxy()
+        && conditionSet().isEmpty()
+        && !additionalSet()
+        && !customSlotBase();
+}
+
+bool AccessCase::canReplace(const AccessCase& other) const
+{
     // We could do a lot better here, but for now we just do something obvious.
+    
+    if (type() == MegamorphicLoad && other.canBeReplacedByMegamorphicLoad())
+        return true;
 
     if (!guardedByStructureCheck() || !other.guardedByStructureCheck()) {
         // FIXME: Implement this!
@@ -407,17 +442,25 @@
     AccessGenerationState& state, CCallHelpers::JumpList& fallThrough)
 {
     CCallHelpers& jit = *state.jit;
+    VM& vm = *jit.vm();
+    const Identifier& ident = *state.ident;
+    StructureStubInfo& stubInfo = *state.stubInfo;
+    JSValueRegs valueRegs = state.valueRegs;
+    GPRReg baseGPR = state.baseGPR;
+    GPRReg scratchGPR = state.scratchGPR;
+    
+    UNUSED_PARAM(vm);
 
     switch (m_type) {
     case ArrayLength: {
         ASSERT(!viaProxy());
-        jit.load8(CCallHelpers::Address(state.baseGPR, JSCell::indexingTypeOffset()), state.scratchGPR);
+        jit.load8(CCallHelpers::Address(baseGPR, JSCell::indexingTypeOffset()), scratchGPR);
         fallThrough.append(
             jit.branchTest32(
-                CCallHelpers::Zero, state.scratchGPR, CCallHelpers::TrustedImm32(IsArray)));
+                CCallHelpers::Zero, scratchGPR, CCallHelpers::TrustedImm32(IsArray)));
         fallThrough.append(
             jit.branchTest32(
-                CCallHelpers::Zero, state.scratchGPR, CCallHelpers::TrustedImm32(IndexingShapeMask)));
+                CCallHelpers::Zero, scratchGPR, CCallHelpers::TrustedImm32(IndexingShapeMask)));
         break;
     }
 
@@ -426,33 +469,146 @@
         fallThrough.append(
             jit.branch8(
                 CCallHelpers::NotEqual,
-                CCallHelpers::Address(state.baseGPR, JSCell::typeInfoTypeOffset()),
+                CCallHelpers::Address(baseGPR, JSCell::typeInfoTypeOffset()),
                 CCallHelpers::TrustedImm32(StringType)));
         break;
     }
+        
+    case MegamorphicLoad: {
+        UniquedStringImpl* key = ident.impl();
+        unsigned hash = IdentifierRepHash::hash(key);
+        
+        ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+        allocator.lock(baseGPR);
+#if USE(JSVALUE32_64)
+        allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseTagGPR));
+#endif
+        allocator.lock(valueRegs);
+        allocator.lock(scratchGPR);
+        
+        GPRReg intermediateGPR = scratchGPR;
+        GPRReg maskGPR = allocator.allocateScratchGPR();
+        GPRReg maskedHashGPR = allocator.allocateScratchGPR();
+        GPRReg indexGPR = allocator.allocateScratchGPR();
+        GPRReg offsetGPR = allocator.allocateScratchGPR();
+        
+        if (verbose) {
+            dataLog("baseGPR = ", baseGPR, "\n");
+            dataLog("valueRegs = ", valueRegs, "\n");
+            dataLog("scratchGPR = ", scratchGPR, "\n");
+            dataLog("intermediateGPR = ", intermediateGPR, "\n");
+            dataLog("maskGPR = ", maskGPR, "\n");
+            dataLog("maskedHashGPR = ", maskedHashGPR, "\n");
+            dataLog("indexGPR = ", indexGPR, "\n");
+            dataLog("offsetGPR = ", offsetGPR, "\n");
+        }
 
+        ScratchRegisterAllocator::PreservedState preservedState =
+            allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
+
+        CCallHelpers::JumpList myFailAndIgnore;
+        CCallHelpers::JumpList myFallThrough;
+        
+        jit.emitLoadStructure(baseGPR, intermediateGPR, maskGPR);
+        jit.loadPtr(
+            CCallHelpers::Address(intermediateGPR, Structure::propertyTableUnsafeOffset()),
+            intermediateGPR);
+        
+        myFailAndIgnore.append(jit.branchTestPtr(CCallHelpers::Zero, intermediateGPR));
+        
+        jit.load32(CCallHelpers::Address(intermediateGPR, PropertyTable::offsetOfIndexMask()), maskGPR);
+        jit.loadPtr(CCallHelpers::Address(intermediateGPR, PropertyTable::offsetOfIndex()), indexGPR);
+        jit.load32(
+            CCallHelpers::Address(intermediateGPR, PropertyTable::offsetOfIndexSize()),
+            intermediateGPR);
+
+        jit.move(maskGPR, maskedHashGPR);
+        jit.and32(CCallHelpers::TrustedImm32(hash), maskedHashGPR);
+        jit.lshift32(CCallHelpers::TrustedImm32(2), intermediateGPR);
+        jit.addPtr(indexGPR, intermediateGPR);
+        
+        CCallHelpers::Label loop = jit.label();
+        
+        jit.load32(CCallHelpers::BaseIndex(indexGPR, maskedHashGPR, CCallHelpers::TimesFour), offsetGPR);
+        
+        myFallThrough.append(
+            jit.branch32(
+                CCallHelpers::Equal,
+                offsetGPR,
+                CCallHelpers::TrustedImm32(PropertyTable::EmptyEntryIndex)));
+        
+        jit.sub32(CCallHelpers::TrustedImm32(1), offsetGPR);
+        jit.mul32(CCallHelpers::TrustedImm32(sizeof(PropertyMapEntry)), offsetGPR, offsetGPR);
+        jit.addPtr(intermediateGPR, offsetGPR);
+        
+        CCallHelpers::Jump collision =  jit.branchPtr(
+            CCallHelpers::NotEqual,
+            CCallHelpers::Address(offsetGPR, OBJECT_OFFSETOF(PropertyMapEntry, key)),
+            CCallHelpers::TrustedImmPtr(key));
+        
+        // offsetGPR currently holds a pointer to the PropertyMapEntry, which has the offset and attributes.
+        // Check them and then attempt the load.
+        
+        myFallThrough.append(
+            jit.branchTest32(
+                CCallHelpers::NonZero,
+                CCallHelpers::Address(offsetGPR, OBJECT_OFFSETOF(PropertyMapEntry, attributes)),
+                CCallHelpers::TrustedImm32(Accessor | CustomAccessor)));
+        
+        jit.load32(CCallHelpers::Address(offsetGPR, OBJECT_OFFSETOF(PropertyMapEntry, offset)), offsetGPR);
+        
+        jit.loadProperty(baseGPR, offsetGPR, valueRegs);
+        
+        allocator.restoreReusedRegistersByPopping(jit, preservedState);
+        state.succeed();
+        
+        collision.link(&jit);
+
+        jit.add32(CCallHelpers::TrustedImm32(1), maskedHashGPR);
+        
+        // FIXME: We could be smarter about this. Currently we're burning a GPR for the mask. But looping
+        // around isn't super common so we could, for example, recompute the mask from the difference between
+        // the table and index. But before we do that we should probably make it easier to multiply and
+        // divide by the size of PropertyMapEntry. That probably involves making PropertyMapEntry be arranged
+        // to have a power-of-2 size.
+        jit.and32(maskGPR, maskedHashGPR);
+        jit.jump().linkTo(loop, &jit);
+        
+        if (allocator.didReuseRegisters()) {
+            myFailAndIgnore.link(&jit);
+            allocator.restoreReusedRegistersByPopping(jit, preservedState);
+            state.failAndIgnore.append(jit.jump());
+            
+            myFallThrough.link(&jit);
+            allocator.restoreReusedRegistersByPopping(jit, preservedState);
+            fallThrough.append(jit.jump());
+        } else {
+            state.failAndIgnore.append(myFailAndIgnore);
+            fallThrough.append(myFallThrough);
+        }
+        return;
+    }
+
     default: {
         if (viaProxy()) {
             fallThrough.append(
                 jit.branch8(
                     CCallHelpers::NotEqual,
-                    CCallHelpers::Address(state.baseGPR, JSCell::typeInfoTypeOffset()),
+                    CCallHelpers::Address(baseGPR, JSCell::typeInfoTypeOffset()),
                     CCallHelpers::TrustedImm32(PureForwardingProxyType)));
 
-            jit.loadPtr(
-                CCallHelpers::Address(state.baseGPR, JSProxy::targetOffset()),
-                state.scratchGPR);
+            jit.loadPtr(CCallHelpers::Address(baseGPR, JSProxy::targetOffset()), scratchGPR);
 
             fallThrough.append(
                 jit.branchStructure(
                     CCallHelpers::NotEqual,
-                    CCallHelpers::Address(state.scratchGPR, JSCell::structureIDOffset()),
+                    CCallHelpers::Address(scratchGPR, JSCell::structureIDOffset()),
                     structure()));
         } else {
             fallThrough.append(
                 jit.branchStructure(
                     CCallHelpers::NotEqual,
-                    CCallHelpers::Address(state.baseGPR, JSCell::structureIDOffset()),
+                    CCallHelpers::Address(baseGPR, JSCell::structureIDOffset()),
                     structure()));
         }
         break;
@@ -1091,15 +1247,22 @@
 
         emitIntrinsicGetter(state);
         return;
-    } }
+    }
     
+    case MegamorphicLoad:
+        // These need to be handled by generateWithGuard(), since the guard is part of the megamorphic load
+        // algorithm. We can be sure that nobody will call generate() directly for MegamorphicLoad since
+        // MegamorphicLoad is not guarded by a structure check.
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
     RELEASE_ASSERT_NOT_REACHED();
 }
 
 PolymorphicAccess::PolymorphicAccess() { }
 PolymorphicAccess::~PolymorphicAccess() { }
 
-MacroAssemblerCodePtr PolymorphicAccess::regenerateWithCases(
+AccessGenerationResult PolymorphicAccess::regenerateWithCases(
     VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo, const Identifier& ident,
     Vector<std::unique_ptr<AccessCase>> originalCasesToAdd)
 {
@@ -1114,8 +1277,7 @@
     //   and the previous stub are kept intact and the new cases are destroyed. It's OK to attempt to
     //   add more things after failure.
     
-    // First, verify that we can generate code for all of the new cases while eliminating any of the
-    // new cases that replace each other.
+    // First ensure that the originalCasesToAdd doesn't contain duplicates.
     Vector<std::unique_ptr<AccessCase>> casesToAdd;
     for (unsigned i = 0; i < originalCasesToAdd.size(); ++i) {
         std::unique_ptr<AccessCase> myCase = WTFMove(originalCasesToAdd[i]);
@@ -1142,7 +1304,7 @@
     // new stub that will be identical to the old one. Returning null should tell the caller to just
     // keep doing what they were doing before.
     if (casesToAdd.isEmpty())
-        return MacroAssemblerCodePtr();
+        return AccessGenerationResult::MadeNoChanges;
 
     // Now construct the list of cases as they should appear if we are successful. This means putting
     // all of the previous cases in this list in order but excluding those that can be replaced, and
@@ -1171,22 +1333,43 @@
 
     if (verbose)
         dataLog("newCases: ", listDump(newCases), "\n");
+    
+    // See if we are close to having too many cases and if some of those cases can be subsumed by a
+    // megamorphic load.
+    if (newCases.size() >= Options::maxAccessVariantListSize()) {
+        unsigned numSelfLoads = 0;
+        for (auto& newCase : newCases) {
+            if (newCase->canBeReplacedByMegamorphicLoad())
+                numSelfLoads++;
+        }
+        
+        if (numSelfLoads >= Options::megamorphicLoadCost()) {
+            if (auto mega = AccessCase::megamorphicLoad(vm, codeBlock)) {
+                newCases.removeAllMatching(
+                    [&] (std::unique_ptr<AccessCase>& newCase) -> bool {
+                        return newCase->canBeReplacedByMegamorphicLoad();
+                    });
+                
+                newCases.append(WTFMove(mega));
+            }
+        }
+    }
 
     if (newCases.size() > Options::maxAccessVariantListSize()) {
         if (verbose)
             dataLog("Too many cases.\n");
-        return MacroAssemblerCodePtr();
+        return AccessGenerationResult::GaveUp;
     }
 
     MacroAssemblerCodePtr result = regenerate(vm, codeBlock, stubInfo, ident, newCases);
     if (!result)
-        return MacroAssemblerCodePtr();
+        return AccessGenerationResult::GaveUp;
 
     m_list = WTFMove(newCases);
     return result;
 }
 
-MacroAssemblerCodePtr PolymorphicAccess::regenerateWithCase(
+AccessGenerationResult PolymorphicAccess::regenerateWithCase(
     VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo, const Identifier& ident,
     std::unique_ptr<AccessCase> newAccess)
 {
@@ -1403,12 +1586,32 @@
 
 using namespace JSC;
 
+void printInternal(PrintStream& out, AccessGenerationResult::Kind kind)
+{
+    switch (kind) {
+    case AccessGenerationResult::MadeNoChanges:
+        out.print("MadeNoChanges");
+        return;
+    case AccessGenerationResult::GaveUp:
+        out.print("GaveUp");
+        return;
+    case AccessGenerationResult::GeneratedNewCode:
+        out.print("GeneratedNewCode");
+        return;
+    }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 void printInternal(PrintStream& out, AccessCase::AccessType type)
 {
     switch (type) {
     case AccessCase::Load:
         out.print("Load");
         return;
+    case AccessCase::MegamorphicLoad:
+        out.print("MegamorphicLoad");
+        return;
     case AccessCase::Transition:
         out.print("Transition");
         return;
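[Editor's note] The pruning heuristic added at the top of this hunk can be restated in plain C++. This is a minimal sketch with hypothetical stand-in types (not JSC's real `AccessCase`), using the option values this patch sets in Options.h: once the case list reaches `maxAccessVariantListSize`, all self-load cases are collapsed into one MegamorphicLoad case, but only if there are at least `megamorphicLoadCost` of them to amortize the inline hashtable lookup.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for AccessCase; "selfLoad" models
// canBeReplacedByMegamorphicLoad().
struct Case { bool selfLoad; std::string name; };

static const size_t maxAccessVariantListSize = 13; // values from Options.h in this patch
static const size_t megamorphicLoadCost = 10;

void pruneToMegamorphic(std::vector<Case>& cases)
{
    if (cases.size() < maxAccessVariantListSize)
        return;
    size_t numSelfLoads = static_cast<size_t>(std::count_if(
        cases.begin(), cases.end(), [](const Case& c) { return c.selfLoad; }));
    if (numSelfLoads < megamorphicLoadCost)
        return;
    // Remove every collapsible case, then append one megamorphic case,
    // mirroring removeAllMatching() + append(WTFMove(mega)).
    cases.erase(
        std::remove_if(cases.begin(), cases.end(),
                       [](const Case& c) { return c.selfLoad; }),
        cases.end());
    cases.push_back({false, "MegamorphicLoad"});
}
```

Note that a list with fewer than `megamorphicLoadCost` self loads is left alone, so the subsequent "Too many cases" check can still give up.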

Modified: trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -53,6 +53,7 @@
 public:
     enum AccessType {
         Load,
+        MegamorphicLoad,
         Transition,
         Replace,
         Miss,
@@ -81,6 +82,7 @@
         case InMiss:
             return false;
         case Load:
+        case MegamorphicLoad:
         case Miss:
         case Getter:
         case CustomValueGetter:
@@ -96,6 +98,7 @@
     {
         switch (type) {
         case Load:
+        case MegamorphicLoad:
         case Miss:
         case Getter:
         case CustomValueGetter:
@@ -119,6 +122,7 @@
     {
         switch (type) {
         case Load:
+        case MegamorphicLoad:
         case Miss:
         case Getter:
         case CustomValueGetter:
@@ -145,7 +149,9 @@
         WatchpointSet* additionalSet = nullptr,
         PropertySlot::GetValueFunc = nullptr,
         JSObject* customSlotBase = nullptr);
-
+    
+    static std::unique_ptr<AccessCase> megamorphicLoad(VM&, JSCell* owner);
+    
     static std::unique_ptr<AccessCase> replace(VM&, JSCell* owner, Structure*, PropertyOffset);
 
     static std::unique_ptr<AccessCase> transition(
@@ -247,13 +253,15 @@
 
     // Is it still possible for this case to ever be taken?
     bool couldStillSucceed() const;
-
+    
     static bool canEmitIntrinsicGetter(JSFunction*, Structure*);
 
+    bool canBeReplacedByMegamorphicLoad() const;
+
     // If this method returns true, then it's a good idea to remove 'other' from the access once 'this'
     // is added. This method assumes that in case of contradictions, 'this' represents a newer, and so
     // more useful, truth. This method can be conservative; it will return false when in doubt.
-    bool canReplace(const AccessCase& other);
+    bool canReplace(const AccessCase& other) const;
 
     void dump(PrintStream& out) const;
     
@@ -308,6 +316,61 @@
     std::unique_ptr<RareData> m_rareData;
 };
 
+class AccessGenerationResult {
+public:
+    enum Kind {
+        MadeNoChanges,
+        GaveUp,
+        GeneratedNewCode
+    };
+    
+    AccessGenerationResult()
+    {
+    }
+    
+    AccessGenerationResult(Kind kind)
+        : m_kind(kind)
+    {
+        ASSERT(kind != GeneratedNewCode);
+    }
+    
+    AccessGenerationResult(MacroAssemblerCodePtr code)
+        : m_kind(GeneratedNewCode)
+        , m_code(code)
+    {
+        RELEASE_ASSERT(code);
+    }
+    
+    bool operator==(const AccessGenerationResult& other) const
+    {
+        return m_kind == other.m_kind && m_code == other.m_code;
+    }
+    
+    bool operator!=(const AccessGenerationResult& other) const
+    {
+        return !(*this == other);
+    }
+    
+    explicit operator bool() const
+    {
+        return *this != AccessGenerationResult();
+    }
+    
+    Kind kind() const { return m_kind; }
+    
+    const MacroAssemblerCodePtr& code() const { return m_code; }
+    
+    bool madeNoChanges() const { return m_kind == MadeNoChanges; }
+    bool gaveUp() const { return m_kind == GaveUp; }
+    bool generatedNewCode() const { return m_kind == GeneratedNewCode; }
+    
+    void dump(PrintStream&) const;
+    
+private:
+    Kind m_kind;
+    MacroAssemblerCodePtr m_code;
+};
+
 class PolymorphicAccess {
     WTF_MAKE_NONCOPYABLE(PolymorphicAccess);
     WTF_MAKE_FAST_ALLOCATED;
@@ -318,10 +381,10 @@
     // This may return null, in which case the old stub routine is left intact. You are required to
     // pass a vector of non-null access cases. This will prune the access cases by rejecting any case
     // in the list that is subsumed by a later case in the list.
-    MacroAssemblerCodePtr regenerateWithCases(
+    AccessGenerationResult regenerateWithCases(
         VM&, CodeBlock*, StructureStubInfo&, const Identifier&, Vector<std::unique_ptr<AccessCase>>);
 
-    MacroAssemblerCodePtr regenerateWithCase(
+    AccessGenerationResult regenerateWithCase(
         VM&, CodeBlock*, StructureStubInfo&, const Identifier&, std::unique_ptr<AccessCase>);
     
     bool isEmpty() const { return m_list.isEmpty(); }
@@ -362,9 +425,9 @@
 
 struct AccessGenerationState {
     AccessGenerationState()
-    : m_calculatedRegistersForCallAndExceptionHandling(false)
-    , m_needsToRestoreRegistersIfException(false)
-    , m_calculatedCallSiteIndex(false)
+        : m_calculatedRegistersForCallAndExceptionHandling(false)
+        , m_needsToRestoreRegistersIfException(false)
+        , m_calculatedCallSiteIndex(false)
     {
     }
     CCallHelpers* jit { nullptr };
@@ -441,6 +504,7 @@
 
 namespace WTF {
 
+void printInternal(PrintStream&, JSC::AccessGenerationResult::Kind);
 void printInternal(PrintStream&, JSC::AccessCase::AccessType);
 
 } // namespace WTF
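[Editor's note] The new `AccessGenerationResult` replaces the old nullable `MacroAssemblerCodePtr` return so that callers can tell "gave up forever" apart from "made no changes this time". A simplified model (hypothetical types, not the real JSC class) and the caller pattern used by the Repatch.cpp hunks below:

```cpp
#include <cassert>
#include <cstdint>

typedef uintptr_t CodePtr; // stand-in for MacroAssemblerCodePtr

class Result {
public:
    enum Kind { MadeNoChanges, GaveUp, GeneratedNewCode };

    Result(Kind kind) : m_kind(kind), m_code(0) { assert(kind != GeneratedNewCode); }
    explicit Result(CodePtr code) : m_kind(GeneratedNewCode), m_code(code) { assert(code); }

    CodePtr code() const { return m_code; }
    bool gaveUp() const { return m_kind == GaveUp; }
    bool madeNoChanges() const { return m_kind == MadeNoChanges; }

private:
    Kind m_kind;
    CodePtr m_code;
};

// Tri-state dispatch, as in tryCacheGetByID and friends:
enum CacheAction { GiveUpOnCache, RetryCacheLater };

CacheAction consume(const Result& r)
{
    if (r.gaveUp())
        return GiveUpOnCache;   // stop trying to cache this access
    if (r.madeNoChanges())
        return RetryCacheLater; // keep the old stub, try again later
    assert(r.code());           // a real caller repatches the jump here
    return RetryCacheLater;
}
```

The old code conflated the first two outcomes into a null code pointer, forcing GiveUpOnCache in both.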

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp (199068 => 199069)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2014-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -104,13 +104,13 @@
     RELEASE_ASSERT_NOT_REACHED();
 }
 
-MacroAssemblerCodePtr StructureStubInfo::addAccessCase(
+AccessGenerationResult StructureStubInfo::addAccessCase(
     CodeBlock* codeBlock, const Identifier& ident, std::unique_ptr<AccessCase> accessCase)
 {
     VM& vm = *codeBlock->vm();
     
     if (!accessCase)
-        return MacroAssemblerCodePtr();
+        return AccessGenerationResult::MadeNoChanges;
     
     if (cacheType == CacheType::Stub)
         return u.stub->regenerateWithCase(vm, codeBlock, *this, ident, WTFMove(accessCase));
@@ -126,11 +126,11 @@
 
     accessCases.append(WTFMove(accessCase));
 
-    MacroAssemblerCodePtr result =
+    AccessGenerationResult result =
         access->regenerateWithCases(vm, codeBlock, *this, ident, WTFMove(accessCases));
 
-    if (!result)
-        return MacroAssemblerCodePtr();
+    if (!result.generatedNewCode())
+        return result;
 
     initStub(codeBlock, WTFMove(access));
     return result;

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2012-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -33,7 +33,6 @@
 #include "ObjectPropertyConditionSet.h"
 #include "Opcode.h"
 #include "Options.h"
-#include "PolymorphicAccess.h"
 #include "RegisterSet.h"
 #include "Structure.h"
 #include "StructureStubClearingWatchpoint.h"
@@ -42,6 +41,8 @@
 
 #if ENABLE(JIT)
 
+class AccessCase;
+class AccessGenerationResult;
 class PolymorphicAccess;
 
 enum class AccessType : int8_t {
@@ -68,8 +69,7 @@
     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
     void initStub(CodeBlock*, std::unique_ptr<PolymorphicAccess>);
 
-    MacroAssemblerCodePtr addAccessCase(
-        CodeBlock*, const Identifier&, std::unique_ptr<AccessCase>);
+    AccessGenerationResult addAccessCase(CodeBlock*, const Identifier&, std::unique_ptr<AccessCase>);
 
     void reset(CodeBlock*);
 

Modified: trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -419,7 +419,43 @@
 #endif
 }
 
+void AssemblyHelpers::loadProperty(GPRReg object, GPRReg offset, JSValueRegs result)
+{
+    Jump isInline = branch32(LessThan, offset, TrustedImm32(firstOutOfLineOffset));
+    
+    loadPtr(Address(object, JSObject::butterflyOffset()), result.payloadGPR());
+    neg32(offset);
+    signExtend32ToPtr(offset, offset);
+    Jump ready = jump();
+    
+    isInline.link(this);
+    addPtr(
+        TrustedImm32(
+            static_cast<int32_t>(sizeof(JSObject)) -
+            (static_cast<int32_t>(firstOutOfLineOffset) - 2) * static_cast<int32_t>(sizeof(EncodedJSValue))),
+        object, result.payloadGPR());
+    
+    ready.link(this);
+    
+    loadValue(
+        BaseIndex(
+            result.payloadGPR(), offset, TimesEight, (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue)),
+        result);
+}
+
+void AssemblyHelpers::emitLoadStructure(RegisterID source, RegisterID dest, RegisterID scratch)
+{
 #if USE(JSVALUE64)
+    load32(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
+    loadPtr(vm()->heap.structureIDTable().base(), scratch);
+    loadPtr(MacroAssembler::BaseIndex(scratch, dest, MacroAssembler::TimesEight), dest);
+#else
+    UNUSED_PARAM(scratch);
+    loadPtr(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
+#endif
+}
+
+#if USE(JSVALUE64)
 template<typename LoadFromHigh, typename StoreToHigh, typename LoadFromLow, typename StoreToLow>
 void emitRandomThunkImpl(AssemblyHelpers& jit, GPRReg scratch0, GPRReg scratch1, GPRReg scratch2, FPRReg result, const LoadFromHigh& loadFromHigh, const StoreToHigh& storeToHigh, const LoadFromLow& loadFromLow, const StoreToLow& storeToLow)
 {
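[Editor's note] The address arithmetic that `AssemblyHelpers::loadProperty` emits above can be checked on the host side. This sketch uses assumed values for `firstOutOfLineOffset` and the object header size (they are not the real JSC constants): inline properties live right after the object header, while out-of-line properties are indexed negatively off the butterfly, matching the `neg32(offset)` plus `BaseIndex(..., (firstOutOfLineOffset - 2) * sizeof(EncodedJSValue))` sequence.

```cpp
#include <cstddef>
#include <cstdint>

static const int firstOutOfLineOffset = 100; // assumed, not JSC's actual value
static const size_t objectHeaderSlots = 2;   // assumed sizeof(JSObject) / sizeof(EncodedJSValue)

// Returns the 8-byte slot a property occupies for a given abstract
// PropertyOffset, reproducing the JIT's two-path address computation.
uint64_t* propertySlot(uint64_t* object, uint64_t* butterfly, int offset)
{
    if (offset < firstOutOfLineOffset)
        return object + objectHeaderSlots + offset;      // inline path
    return butterfly + (firstOutOfLineOffset - 2 - offset); // out-of-line path
}
```

Working the out-of-line path backwards from the assembly: negating `offset` and adding the `(firstOutOfLineOffset - 2) * 8` displacement yields `butterfly + (firstOutOfLineOffset - 2 - offset) * 8`, i.e. the slots grow downward from the butterfly pointer.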

Modified: trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -149,6 +149,9 @@
         }
 #endif
     }
+    
+    // Note that this clobbers offset.
+    void loadProperty(GPRReg object, GPRReg offset, JSValueRegs result);
 
     void moveValueRegs(JSValueRegs srcRegs, JSValueRegs destRegs)
     {
@@ -1209,30 +1212,8 @@
         return argumentsStart(codeOrigin.inlineCallFrame);
     }
     
-    void emitLoadStructure(RegisterID source, RegisterID dest, RegisterID scratch)
-    {
-#if USE(JSVALUE64)
-        load32(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
-        loadPtr(vm()->heap.structureIDTable().base(), scratch);
-        loadPtr(MacroAssembler::BaseIndex(scratch, dest, MacroAssembler::TimesEight), dest);
-#else
-        UNUSED_PARAM(scratch);
-        loadPtr(MacroAssembler::Address(source, JSCell::structureIDOffset()), dest);
-#endif
-    }
+    void emitLoadStructure(RegisterID source, RegisterID dest, RegisterID scratch);
 
-    static void emitLoadStructure(AssemblyHelpers& jit, RegisterID base, RegisterID dest, RegisterID scratch)
-    {
-#if USE(JSVALUE64)
-        jit.load32(MacroAssembler::Address(base, JSCell::structureIDOffset()), dest);
-        jit.loadPtr(jit.vm()->heap.structureIDTable().base(), scratch);
-        jit.loadPtr(MacroAssembler::BaseIndex(scratch, dest, MacroAssembler::TimesEight), dest);
-#else
-        UNUSED_PARAM(scratch);
-        jit.loadPtr(MacroAssembler::Address(base, JSCell::structureIDOffset()), dest);
-#endif
-    }
-
     void emitStoreStructureWithTypeInfo(TrustedImmPtr structure, RegisterID dest, RegisterID)
     {
         emitStoreStructureWithTypeInfo(*this, structure, dest);

Modified: trunk/Source/JavaScriptCore/jit/GPRInfo.cpp (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/GPRInfo.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/GPRInfo.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -30,6 +30,15 @@
 
 namespace JSC {
 
+void JSValueRegs::dump(PrintStream& out) const
+{
+#if USE(JSVALUE64)
+    out.print(m_gpr);
+#else
+    out.print("(tag:", tagGPR(), ", payload:", payloadGPR(), ")");
+#endif
+}
+
 // This is in the .cpp file to work around clang issues.
 #if CPU(X86_64)
 const GPRReg GPRInfo::patchpointScratchRegister = MacroAssembler::s_scratchRegister;

Modified: trunk/Source/JavaScriptCore/jit/GPRInfo.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/GPRInfo.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/GPRInfo.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -77,6 +77,8 @@
     
     bool uses(GPRReg gpr) const { return m_gpr == gpr; }
     
+    void dump(PrintStream&) const;
+    
 private:
     GPRReg m_gpr;
 };
@@ -202,6 +204,8 @@
 
     bool uses(GPRReg gpr) const { return m_tagGPR == gpr || m_payloadGPR == gpr; }
     
+    void dump(PrintStream&) const;
+    
 private:
     int8_t m_tagGPR;
     int8_t m_payloadGPR;

Modified: trunk/Source/JavaScriptCore/jit/Repatch.cpp (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/Repatch.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/Repatch.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -161,6 +161,8 @@
 
 static void replaceWithJump(StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
 {
+    RELEASE_ASSERT(target);
+    
     if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
         MacroAssembler::replaceWithJump(
             MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
@@ -315,14 +317,16 @@
         }
     }
 
-    MacroAssemblerCodePtr codePtr =
-        stubInfo.addAccessCase(codeBlock, propertyName, WTFMove(newCase));
+    AccessGenerationResult result = stubInfo.addAccessCase(codeBlock, propertyName, WTFMove(newCase));
 
-    if (!codePtr)
+    if (result.gaveUp())
         return GiveUpOnCache;
-
-    replaceWithJump(stubInfo, codePtr);
+    if (result.madeNoChanges())
+        return RetryCacheLater;
     
+    RELEASE_ASSERT(result.code());
+    replaceWithJump(stubInfo, result.code());
+    
     return RetryCacheLater;
 }
 
@@ -457,16 +461,19 @@
         }
     }
 
-    MacroAssemblerCodePtr codePtr = stubInfo.addAccessCase(codeBlock, ident, WTFMove(newCase));
+    AccessGenerationResult result = stubInfo.addAccessCase(codeBlock, ident, WTFMove(newCase));
     
-    if (!codePtr)
+    if (result.gaveUp())
         return GiveUpOnCache;
+    if (result.madeNoChanges())
+        return RetryCacheLater;
 
+    RELEASE_ASSERT(result.code());
     resetPutByIDCheckAndLoad(stubInfo);
     MacroAssembler::repatchJump(
         stubInfo.callReturnLocation.jumpAtOffset(
             stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(codePtr));
+        CodeLocationLabel(result.code()));
     
     return RetryCacheLater;
 }
@@ -514,13 +521,16 @@
     std::unique_ptr<AccessCase> newCase = AccessCase::in(
         vm, codeBlock, wasFound ? AccessCase::InHit : AccessCase::InMiss, structure, conditionSet);
 
-    MacroAssemblerCodePtr codePtr = stubInfo.addAccessCase(codeBlock, ident, WTFMove(newCase));
-    if (!codePtr)
+    AccessGenerationResult result = stubInfo.addAccessCase(codeBlock, ident, WTFMove(newCase));
+    if (result.gaveUp())
         return GiveUpOnCache;
+    if (result.madeNoChanges())
+        return RetryCacheLater;
 
+    RELEASE_ASSERT(result.code());
     MacroAssembler::repatchJump(
         stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(codePtr));
+        CodeLocationLabel(result.code()));
     
     return RetryCacheLater;
 }

Modified: trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp (199068 => 199069)


--- trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp	2016-04-05 19:58:04 UTC (rev 199069)
@@ -191,7 +191,7 @@
             CCallHelpers::NotEqual, GPRInfo::regT1,
             CCallHelpers::TrustedImm32(JSValue::CellTag)));
 #endif
-    AssemblyHelpers::emitLoadStructure(jit, GPRInfo::regT0, GPRInfo::regT4, GPRInfo::regT1);
+    jit.emitLoadStructure(GPRInfo::regT0, GPRInfo::regT4, GPRInfo::regT1);
     slowCase.append(
         jit.branchPtr(
             CCallHelpers::NotEqual,

Modified: trunk/Source/JavaScriptCore/runtime/Options.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/runtime/Options.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/runtime/Options.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -187,7 +187,8 @@
     v(bool, ftlCrashes, false, nullptr) /* fool-proof way of checking that you ended up in the FTL. ;-) */\
     v(bool, clobberAllRegsInFTLICSlowPath, !ASSERT_DISABLED, nullptr) \
     v(bool, useAccessInlining, true, nullptr) \
-    v(unsigned, maxAccessVariantListSize, 8, nullptr) \
+    v(unsigned, maxAccessVariantListSize, 13, nullptr) \
+    v(unsigned, megamorphicLoadCost, 10, nullptr) \
     v(bool, usePolyvariantDevirtualization, true, nullptr) \
     v(bool, usePolymorphicAccessInlining, true, nullptr) \
     v(bool, usePolymorphicCallInlining, true, nullptr) \

Modified: trunk/Source/JavaScriptCore/runtime/PropertyMapHashTable.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/runtime/PropertyMapHashTable.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/runtime/PropertyMapHashTable.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -191,7 +191,13 @@
     size_t sizeInMemory();
     void checkConsistency();
 #endif
+    
+    static ptrdiff_t offsetOfIndexSize() { return OBJECT_OFFSETOF(PropertyTable, m_indexSize); }
+    static ptrdiff_t offsetOfIndexMask() { return OBJECT_OFFSETOF(PropertyTable, m_indexMask); }
+    static ptrdiff_t offsetOfIndex() { return OBJECT_OFFSETOF(PropertyTable, m_index); }
 
+    static const unsigned EmptyEntryIndex = 0;
+
 private:
     PropertyTable(VM&, unsigned initialCapacity);
     PropertyTable(VM&, const PropertyTable&);
@@ -244,7 +250,6 @@
     std::unique_ptr<Vector<PropertyOffset>> m_deletedOffsets;
 
     static const unsigned MinimumTableSize = 16;
-    static const unsigned EmptyEntryIndex = 0;
 };
 
 inline PropertyTable::iterator PropertyTable::begin()
@@ -272,7 +277,6 @@
     ASSERT(key);
     ASSERT(key->isAtomic() || key->isSymbol());
     unsigned hash = IdentifierRepHash::hash(key);
-    unsigned step = 0;
 
 #if DUMP_PROPERTYMAP_STATS
     ++propertyMapHashTableStats->numFinds;
@@ -285,19 +289,16 @@
         if (key == table()[entryIndex - 1].key)
             return std::make_pair(&table()[entryIndex - 1], hash & m_indexMask);
 
-        if (!step)
-            step = WTF::doubleHash(IdentifierRepHash::hash(key)) | 1;
-
 #if DUMP_PROPERTYMAP_STATS
         ++propertyMapHashTableStats->numCollisions;
 #endif
 
 #if DUMP_PROPERTYMAP_COLLISIONS
-        dataLog("PropertyTable collision for ", key, " (", hash, ") with step ", step, "\n");
+        dataLog("PropertyTable collision for ", key, " (", hash, ")\n");
         dataLog("Collided with ", table()[entryIndex - 1].key, "(", IdentifierRepHash::hash(table()[entryIndex - 1].key), ")\n");
 #endif
 
-        hash += step;
+        hash++;
     }
 }
 
@@ -310,7 +311,6 @@
         return nullptr;
 
     unsigned hash = IdentifierRepHash::hash(key);
-    unsigned step = 0;
 
 #if DUMP_PROPERTYMAP_STATS
     ++propertyMapHashTableStats->numLookups;
@@ -327,9 +327,7 @@
         ++propertyMapHashTableStats->numLookupProbing;
 #endif
 
-        if (!step)
-            step = WTF::doubleHash(IdentifierRepHash::hash(key)) | 1;
-        hash += step;
+        hash++;
     }
 }
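[Editor's note] The probing change in the two hunks above can be sketched in isolation: with double hashing removed, a collision simply advances the probe index by one (linear probing), so no extra `step` value has to stay live across the loop. Types here are simplified stand-ins for PropertyTable's index vector and entry array, under the assumption that the index table is power-of-two sized and masked by `indexMask`.

```cpp
#include <cstdint>
#include <vector>

static const unsigned EmptyEntryIndex = 0;

// index[] holds 1-based entry indices; indexMask is table size - 1.
// Returns the 0-based entry index for key, or -1 on a miss.
int findEntry(const std::vector<unsigned>& index, unsigned indexMask,
              const std::vector<uint32_t>& entryKeys, uint32_t key, unsigned hash)
{
    while (true) {
        unsigned entryIndex = index[hash & indexMask];
        if (entryIndex == EmptyEntryIndex)
            return -1;                  // empty slot terminates the probe
        if (entryKeys[entryIndex - 1] == key)
            return int(entryIndex) - 1; // hit
        hash++; // was: hash += (WTF::doubleHash(hash0) | 1)
    }
}
```

The loop terminates as long as the table keeps at least one empty slot, which the resize policy guarantees; the precomputed-hash benefit for MegamorphicLoad comes from this loop needing nothing but `hash`, `indexMask`, and the table pointers.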
 

Modified: trunk/Source/JavaScriptCore/runtime/Structure.h (199068 => 199069)


--- trunk/Source/JavaScriptCore/runtime/Structure.h	2016-04-05 19:40:07 UTC (rev 199068)
+++ trunk/Source/JavaScriptCore/runtime/Structure.h	2016-04-05 19:58:04 UTC (rev 199069)
@@ -420,6 +420,11 @@
     {
         return OBJECT_OFFSETOF(Structure, m_blob) + StructureIDBlob::indexingTypeOffset();
     }
+    
+    static ptrdiff_t propertyTableUnsafeOffset()
+    {
+        return OBJECT_OFFSETOF(Structure, m_propertyTableUnsafe);
+    }
 
     static Structure* createStructure(VM&);
         