Title: [202214] trunk/Source/JavaScriptCore
Revision: 202214
Author: sbar...@apple.com
Date: 2016-06-19 12:42:18 -0700 (Sun, 19 Jun 2016)

Log Message

We should be able to generate more types of ICs inline
https://bugs.webkit.org/show_bug.cgi?id=158719
<rdar://problem/26825641>

Reviewed by Filip Pizlo.

This patch changes how we emit code for *byId ICs inline.
We no longer keep data labels for patching structure checks, etc.
Instead, we just regenerate the entire IC into a designated
region of code that the Baseline/DFG/FTL JIT will emit inline.
This makes it much simpler to patch inline ICs: all that's
needed to patch an inline IC is to memcpy freshly assembled
code into the inline region using LinkBuffer. This architecture
will be easy to extend to other forms of ICs, such as one
for add, in the future.
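
Concretely, repatching now follows one pattern: assemble a fresh fast
path, and if it fits, link it straight into the reserved region. A
condensed sketch (the function name is illustrative; it is modeled on
the linkCodeInline/generateSelfPropertyAccess code added in
bytecode/InlineAccess.cpp below, with the access body elided):

    // Hypothetical helper: assemble a fresh fast path, then splat it
    // over the reserved inline region.
    static bool regenerateInlineIC(VM& vm, StructureStubInfo& stubInfo, Structure* structure)
    {
        CCallHelpers jit(&vm);
        GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);

        // Structure check; on failure, jump to the out-of-line slow path.
        auto branchToSlowPath = jit.patchableBranch32(
            MacroAssembler::NotEqual,
            MacroAssembler::Address(base, JSCell::structureIDOffset()),
            MacroAssembler::TrustedImm32(bitwise_cast<uint32_t>(structure->id())));
        // ... emit the body of the access ...

        if (jit.m_assembler.buffer().codeSize() > stubInfo.patch.inlineSize)
            return false; // Doesn't fit: fall back to PolymorphicAccess.

        // Branch compaction stays off so the code keeps its exact layout.
        LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(),
            stubInfo.patch.inlineSize, JITCompilationMustSucceed,
            /* shouldPerformBranchCompaction */ false);
        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
        FINALIZE_CODE(linkBuffer, ("inline IC"));
        return true;
    }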

To support this change, I've reworked the fields inside
StructureStubInfo. It now has one field that is the CodeLocationLabel
of the start of the inline IC, plus a few ints that track deltas to
other locations in the IC, such as the slow path start, the slow path
call, and the IC's 'done' location. We used to perform math on these
ints in a bunch of different places; I've consolidated that math into
methods on StructureStubInfo.
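
A sketch of the consolidated geometry, with illustrative names (the
real declarations and accessors are in bytecode/StructureStubInfo.h,
whose accessor list appears below):

    struct InlineICGeometry { // illustrative stand-in for StructureStubInfo's fields
        CodeLocationLabel start;  // start of the reserved inline region
        int32_t inlineSize;       // bytes reserved for the inline IC
        int32_t deltaFromStartToSlowPathCallLocation;
        int32_t deltaFromStartToSlowPathStart;

        // The 'done' label is simply the end of the inline region.
        CodeLocationLabel doneLocation() const { return start.labelAtOffset(inlineSize); }
        CodeLocationCall slowPathCallLocation() const { return start.callAtOffset(deltaFromStartToSlowPathCallLocation); }
        CodeLocationLabel slowPathStartLocation() const { return start.labelAtOffset(deltaFromStartToSlowPathStart); }
    };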

To generate inline ICs, I've implemented a new class called InlineAccess.
InlineAccess is stateless: it just has a bunch of static methods for
generating code into the inline region specified by StructureStubInfo.
Repatch will now decide when it wants to generate such an inline
IC, and it will ask InlineAccess to do so.
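
For example, the get-by-id caching path can now try the inline route
first. A simplified sketch of that decision, not the verbatim code
(the exact guards live in tryCacheGetByID in the jit/Repatch.cpp diff):

    // Simplified: try to cache an array length access entirely inline.
    if (stubInfo.cacheType == CacheType::Unset
        && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseValue))) {
        if (InlineAccess::generateArrayLength(vm, stubInfo, jsCast<JSArray*>(baseValue))) {
            stubInfo.initArrayLength();
            return RetryCacheLater; // the inline code services future hits
        }
    }
    // Otherwise, fall through to the PolymorphicAccess machinery.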

I've implemented three types of inline ICs to begin with (extending
this in the future should be easy):
- Self property loads (both inline and out of line offsets).
- Self property replace (both inline and out of line offsets).
- Array length on specific array types.
(An easy extension would be to implement JSString length.)

To know how much inline space to reserve, I've implemented a
method that stubs out the various inline cache shapes and
dumps their sizes. This is used to determine how much space
to reserve inline. When InlineAccess ends up generating more
code than can fit inline, we fall back to generating
code with PolymorphicAccess instead.
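
The reserved region starts out as a jump to the slow path padded with
nops. A paraphrased sketch of how JITByIdGenerator::generateFastCommon
(see the jit/JITInlineCacheGenerator.cpp entry below) consumes one of
these size constants:

    void generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
    {
        m_start = jit.label();
        size_t startSize = jit.m_assembler.buffer().codeSize();
        m_slowPathJump = jit.jump(); // initially, everything goes slow
        size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
        jit.emitNops(inlineICSize - jumpSize); // pad the region to its full size
        ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
        m_done = jit.label();
    }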

To make generating code into already allocated executable memory
efficient, I've made AssemblerData have 128 bytes of inline storage.
This saves us a malloc when splatting code into the inline region.
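
A quick illustration of the resulting small-buffer behavior
(capacities per the AssemblerBuffer.h diff below):

    AssemblerData small(64);   // <= 128 bytes: lives in the inline buffer, no malloc
    AssemblerData large(4096); // > 128 bytes: heap-allocated via fastMalloc
    small.grow();              // growing past 128 bytes migrates to the heap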

This patch also tidies up LinkBuffer's API for generating
into already allocated executable memory. Now, when the generated
code is smaller than the already allocated space, LinkBuffer
will fill the extra space with nops. Also, if branch compaction shrinks
the code, LinkBuffer will add a nop sled at the end of the shrunken
code so it takes up the entire allocated size.
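
Generating into a fixed-size region therefore reduces to a usage
sketch like this ('start' and 'regionSize' are assumed to describe
already allocated executable memory):

    CCallHelpers jit(&vm);
    // ... emit code ...
    LinkBuffer linkBuffer(jit, start, regionSize, JITCompilationMustSucceed,
        /* shouldPerformBranchCompaction */ false);
    // If jit emitted fewer than regionSize bytes, the tail is nop-filled;
    // if branch compaction were enabled and shrank the code, a nop sled
    // would pad the shrunken code back out to regionSize.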

This looks like it could be a 1% Octane progression.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/ARM64Assembler.h:
(JSC::ARM64Assembler::nop):
(JSC::ARM64Assembler::fillNops):
* assembler/ARMv7Assembler.h:
(JSC::ARMv7Assembler::nopw):
(JSC::ARMv7Assembler::nopPseudo16):
(JSC::ARMv7Assembler::nopPseudo32):
(JSC::ARMv7Assembler::fillNops):
(JSC::ARMv7Assembler::dmbSY):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::addLinkTask):
(JSC::AbstractMacroAssembler::emitNops):
(JSC::AbstractMacroAssembler::AbstractMacroAssembler):
* assembler/AssemblerBuffer.h:
(JSC::AssemblerData::AssemblerData):
(JSC::AssemblerData::operator=):
(JSC::AssemblerData::~AssemblerData):
(JSC::AssemblerData::buffer):
(JSC::AssemblerData::grow):
(JSC::AssemblerData::isInlineBuffer):
(JSC::AssemblerBuffer::AssemblerBuffer):
(JSC::AssemblerBuffer::ensureSpace):
(JSC::AssemblerBuffer::codeSize):
(JSC::AssemblerBuffer::setCodeSize):
(JSC::AssemblerBuffer::label):
(JSC::AssemblerBuffer::debugOffset):
(JSC::AssemblerBuffer::releaseAssemblerData):
* assembler/LinkBuffer.cpp:
(JSC::LinkBuffer::copyCompactAndLinkCode):
(JSC::LinkBuffer::linkCode):
(JSC::LinkBuffer::allocate):
(JSC::LinkBuffer::performFinalization):
(JSC::LinkBuffer::shrink): Deleted.
* assembler/LinkBuffer.h:
(JSC::LinkBuffer::LinkBuffer):
(JSC::LinkBuffer::debugAddress):
(JSC::LinkBuffer::size):
(JSC::LinkBuffer::wasAlreadyDisassembled):
(JSC::LinkBuffer::didAlreadyDisassemble):
(JSC::LinkBuffer::applyOffset):
(JSC::LinkBuffer::code):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::patchableBranch32):
(JSC::MacroAssemblerARM64::patchableBranch64):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::patchableBranch32):
(JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
* assembler/X86Assembler.h:
(JSC::X86Assembler::nop):
(JSC::X86Assembler::fillNops):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
* bytecode/InlineAccess.cpp: Added.
(JSC::InlineAccess::dumpCacheSizesAndCrash):
(JSC::linkCodeInline):
(JSC::InlineAccess::generateSelfPropertyAccess):
(JSC::getScratchRegister):
(JSC::hasFreeRegister):
(JSC::InlineAccess::canGenerateSelfPropertyReplace):
(JSC::InlineAccess::generateSelfPropertyReplace):
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::rewireStubAsJump):
* bytecode/InlineAccess.h: Added.
(JSC::InlineAccess::sizeForPropertyAccess):
(JSC::InlineAccess::sizeForPropertyReplace):
(JSC::InlineAccess::sizeForLengthAccess):
* bytecode/PolymorphicAccess.cpp:
(JSC::PolymorphicAccess::regenerate):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::initGetByIdSelf):
(JSC::StructureStubInfo::initArrayLength):
(JSC::StructureStubInfo::initPutByIdReplace):
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::propagateTransitions):
(JSC::StructureStubInfo::containsPC):
* bytecode/StructureStubInfo.h:
(JSC::StructureStubInfo::considerCaching):
(JSC::StructureStubInfo::slowPathCallLocation):
(JSC::StructureStubInfo::doneLocation):
(JSC::StructureStubInfo::slowPathStartLocation):
(JSC::StructureStubInfo::patchableJumpForIn):
(JSC::StructureStubInfo::valueRegs):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileIn):
(JSC::FTL::DFG::LowerDFGToB3::getById):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::finalize):
(JSC::JITByIdGenerator::generateFastCommon):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
(JSC::JITGetByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::JITPutByIdGenerator):
(JSC::JITPutByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::slowPathFunction):
(JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
* jit/JITInlineCacheGenerator.h:
(JSC::JITByIdGenerator::reportSlowPathCall):
(JSC::JITByIdGenerator::slowPathBegin):
(JSC::JITByIdGenerator::slowPathJump):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/Repatch.cpp:
(JSC::repatchCall):
(JSC::tryCacheGetByID):
(JSC::repatchGetByID):
(JSC::appropriateGenericPutByIdFunction):
(JSC::tryCachePutByID):
(JSC::repatchPutByID):
(JSC::tryRepatchIn):
(JSC::repatchIn):
(JSC::linkSlowFor):
(JSC::resetGetByID):
(JSC::resetPutByID):
(JSC::resetIn):
(JSC::repatchByIdSelfAccess): Deleted.
(JSC::resetGetByIDCheckAndLoad): Deleted.
(JSC::resetPutByIDCheckAndLoad): Deleted.
(JSC::replaceWithJump): Deleted.

Diff

Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (202213 => 202214)


--- trunk/Source/JavaScriptCore/CMakeLists.txt	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt	2016-06-19 19:42:18 UTC (rev 202214)
@@ -200,6 +200,7 @@
     bytecode/ExitingJITType.cpp
     bytecode/GetByIdStatus.cpp
     bytecode/GetByIdVariant.cpp
+    bytecode/InlineAccess.cpp
     bytecode/InlineCallFrame.cpp
     bytecode/InlineCallFrameSet.cpp
     bytecode/JumpTable.cpp

Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (202213 => 202214)


--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj	2016-06-19 19:42:18 UTC (rev 202214)
@@ -1273,6 +1273,8 @@
 		70ECA6091AFDBEA200449739 /* TemplateRegistryKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		72AAF7CD1D0D31B3005E60BE /* JSCustomGetterSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */; };
 		72AAF7CE1D0D31B3005E60BE /* JSCustomGetterSetterFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */; };
+		7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7905BB661D12050E0019FE57 /* InlineAccess.cpp */; };
+		7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */ = {isa = PBXBuildFile; fileRef = 7905BB671D12050E0019FE57 /* InlineAccess.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		79160DBD1C8E3EC8008C085A /* ProxyRevoke.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */; };
 		79160DBE1C8E3EC8008C085A /* ProxyRevoke.h in Headers */ = {isa = PBXBuildFile; fileRef = 79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		792CB3491C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */; };
@@ -3457,6 +3459,8 @@
 		70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TemplateRegistryKey.h; sourceTree = "<group>"; };
 		72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSCustomGetterSetterFunction.cpp; sourceTree = "<group>"; };
 		72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSCustomGetterSetterFunction.h; sourceTree = "<group>"; };
+		7905BB661D12050E0019FE57 /* InlineAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InlineAccess.cpp; sourceTree = "<group>"; };
+		7905BB671D12050E0019FE57 /* InlineAccess.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InlineAccess.h; sourceTree = "<group>"; };
 		79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProxyRevoke.cpp; sourceTree = "<group>"; };
 		79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProxyRevoke.h; sourceTree = "<group>"; };
 		792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PCToCodeOriginMap.cpp; sourceTree = "<group>"; };
@@ -6586,6 +6590,8 @@
 				0F0332C118B01763005F979A /* GetByIdVariant.cpp */,
 				0F0332C218B01763005F979A /* GetByIdVariant.h */,
 				0F0B83A814BCF55E00885B4F /* HandlerInfo.h */,
+				7905BB661D12050E0019FE57 /* InlineAccess.cpp */,
+				7905BB671D12050E0019FE57 /* InlineAccess.h */,
 				148A7BED1B82975A002D9157 /* InlineCallFrame.cpp */,
 				148A7BEE1B82975A002D9157 /* InlineCallFrame.h */,
 				0F24E55317F0B71C00ABB217 /* InlineCallFrameSet.cpp */,
@@ -7330,6 +7336,7 @@
 				0F2B9CE919D0BA7D00B1D1B5 /* DFGObjectMaterializationData.h in Headers */,
 				43C392AB1C3BEB0500241F53 /* AssemblerCommon.h in Headers */,
 				86EC9DD01328DF82002B2AD7 /* DFGOperations.h in Headers */,
+				7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */,
 				A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
 				0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
 				0F40E4A71C497F7400A577FA /* AirOpcode.h in Headers */,
@@ -9315,6 +9322,7 @@
 				0F6B8AD81C4EDDA200969052 /* B3DuplicateTails.cpp in Sources */,
 				527773DE1AAF83AC00BDE7E8 /* RuntimeType.cpp in Sources */,
 				0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */,
+				7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */,
 				0FE050271AA9095600D33B33 /* ScopedArguments.cpp in Sources */,
 				0FE0502F1AAA806900D33B33 /* ScopedArgumentsTable.cpp in Sources */,
 				992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */,

Modified: trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -1484,13 +1484,16 @@
         insn(nopPseudo());
     }
     
-    static void fillNops(void* base, size_t size)
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     {
         RELEASE_ASSERT(!(size % sizeof(int32_t)));
         size_t n = size / sizeof(int32_t);
         for (int32_t* ptr = static_cast<int32_t*>(base); n--;) {
             int insn = nopPseudo();
-            performJITMemcpy(ptr++, &insn, sizeof(int));
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr++, &insn, sizeof(int));
+            else
+                memcpy(ptr++, &insn, sizeof(int));
         }
     }
     

Modified: trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -2004,6 +2004,43 @@
         m_formatter.twoWordOp16Op16(OP_NOP_T2a, OP_NOP_T2b);
     }
     
+    static constexpr int16_t nopPseudo16()
+    {
+        return OP_NOP_T1;
+    }
+
+    static constexpr int32_t nopPseudo32()
+    {
+        return OP_NOP_T2a | (OP_NOP_T2b << 16);
+    }
+
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
+    {
+        RELEASE_ASSERT(!(size % sizeof(int16_t)));
+
+        char* ptr = static_cast<char*>(base);
+        const size_t num32s = size / sizeof(int32_t);
+        for (size_t i = 0; i < num32s; i++) {
+            const int32_t insn = nopPseudo32();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int32_t));
+            else
+                memcpy(ptr, &insn, sizeof(int32_t));
+            ptr += sizeof(int32_t);
+        }
+
+        const size_t num16s = (size % sizeof(int32_t)) / sizeof(int16_t);
+        ASSERT(num16s == 0 || num16s == 1);
+        ASSERT(num16s * sizeof(int16_t) + num32s * sizeof(int32_t) == size);
+        if (num16s) {
+            const int16_t insn = nopPseudo16();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int16_t));
+            else
+                memcpy(ptr, &insn, sizeof(int16_t));
+        }
+    }
+
     void dmbSY()
     {
         m_formatter.twoWordOp16Op16(OP_DMB_SY_T2a, OP_DMB_SY_T2b);

Modified: trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -27,7 +27,6 @@
 #define AbstractMacroAssembler_h
 
 #include "AbortReason.h"
-#include "AssemblerBuffer.h"
 #include "CodeLocation.h"
 #include "MacroAssemblerCodeRef.h"
 #include "Options.h"
@@ -1040,6 +1039,17 @@
         m_linkTasks.append(createSharedTask<void(LinkBuffer&)>(functor));
     }
 
+    void emitNops(size_t memoryToFillWithNopsInBytes)
+    {
+        AssemblerBuffer& buffer = m_assembler.buffer();
+        size_t startCodeSize = buffer.codeSize();
+        size_t targetCodeSize = startCodeSize + memoryToFillWithNopsInBytes;
+        buffer.ensureSpace(memoryToFillWithNopsInBytes);
+        bool isCopyingToExecutableMemory = false;
+        AssemblerType::fillNops(static_cast<char*>(buffer.data()) + startCodeSize, memoryToFillWithNopsInBytes, isCopyingToExecutableMemory);
+        buffer.setCodeSize(targetCodeSize);
+    }
+
 protected:
     AbstractMacroAssembler()
         : m_randomSource(0)

Modified: trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -62,32 +62,54 @@
     };
 
     class AssemblerData {
+        WTF_MAKE_NONCOPYABLE(AssemblerData);
+        static const size_t InlineCapacity = 128;
     public:
         AssemblerData()
-            : m_buffer(nullptr)
-            , m_capacity(0)
+            : m_buffer(m_inlineBuffer)
+            , m_capacity(InlineCapacity)
         {
         }
 
-        AssemblerData(unsigned initialCapacity)
+        AssemblerData(size_t initialCapacity)
         {
-            m_capacity = initialCapacity;
-            m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+            if (initialCapacity <= InlineCapacity) {
+                m_capacity = InlineCapacity;
+                m_buffer = m_inlineBuffer;
+            } else {
+                m_capacity = initialCapacity;
+                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+            }
         }
 
         AssemblerData(AssemblerData&& other)
         {
-            m_buffer = other.m_buffer;
+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
+            m_capacity = other.m_capacity;
+
             other.m_buffer = nullptr;
-            m_capacity = other.m_capacity;
             other.m_capacity = 0;
         }
 
         AssemblerData& operator=(AssemblerData&& other)
         {
-            m_buffer = other.m_buffer;
+            if (m_buffer && !isInlineBuffer())
+                fastFree(m_buffer);
+
+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
+            m_capacity = other.m_capacity;
+
             other.m_buffer = nullptr;
-            m_capacity = other.m_capacity;
             other.m_capacity = 0;
             return *this;
         }
@@ -94,7 +116,8 @@
 
         ~AssemblerData()
         {
-            fastFree(m_buffer);
+            if (m_buffer && !isInlineBuffer())
+                fastFree(m_buffer);
         }
 
         char* buffer() const { return m_buffer; }
@@ -104,19 +127,24 @@
         void grow(unsigned extraCapacity = 0)
         {
             m_capacity = m_capacity + m_capacity / 2 + extraCapacity;
-            m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
+            if (isInlineBuffer()) {
+                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+                memcpy(m_buffer, m_inlineBuffer, InlineCapacity);
+            } else
+                m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
         }
 
     private:
+        bool isInlineBuffer() const { return m_buffer == m_inlineBuffer; }
         char* m_buffer;
+        char m_inlineBuffer[InlineCapacity];
         unsigned m_capacity;
     };
 
     class AssemblerBuffer {
-        static const int initialCapacity = 128;
     public:
         AssemblerBuffer()
-            : m_storage(initialCapacity)
+            : m_storage()
             , m_index(0)
         {
         }
@@ -128,7 +156,7 @@
 
         void ensureSpace(unsigned space)
         {
-            if (!isAvailable(space))
+            while (!isAvailable(space))
                 outOfLineGrow();
         }
 
@@ -156,6 +184,15 @@
             return m_index;
         }
 
+        void setCodeSize(size_t index)
+        {
+            // Warning: Only use this if you know exactly what you are doing.
+            // For example, say you want 40 bytes of nops, it's ok to grow
+            // and then fill 40 bytes of nops using bigger instructions.
+            m_index = index;
+            ASSERT(m_index <= m_storage.capacity());
+        }
+
         AssemblerLabel label() const
         {
             return AssemblerLabel(m_index);
@@ -163,7 +200,7 @@
 
         unsigned debugOffset() { return m_index; }
 
-        AssemblerData releaseAssemblerData() { return WTFMove(m_storage); }
+        AssemblerData&& releaseAssemblerData() { return WTFMove(m_storage); }
 
         // LocalWriter is a trick to keep the storage buffer and the index
         // in memory while issuing multiple Stores.

Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -98,15 +98,17 @@
 template <typename InstructionType>
 void LinkBuffer::copyCompactAndLinkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
 {
-    m_initialSize = macroAssembler.m_assembler.codeSize();
-    allocate(m_initialSize, ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
+    const size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (didFailToAllocate())
         return;
+
     Vector<LinkRecord, 0, UnsafeVectorOverflow>& jumpsToLink = macroAssembler.jumpsToLink();
     m_assemblerStorage = macroAssembler.m_assembler.buffer().releaseAssemblerData();
     uint8_t* inData = reinterpret_cast<uint8_t*>(m_assemblerStorage.buffer());
 
     AssemblerData outBuffer(m_size);
+
     uint8_t* outData = reinterpret_cast<uint8_t*>(outBuffer.buffer());
     uint8_t* codeOutData = reinterpret_cast<uint8_t*>(m_code);
 
@@ -113,47 +115,54 @@
     int readPtr = 0;
     int writePtr = 0;
     unsigned jumpCount = jumpsToLink.size();
-    for (unsigned i = 0; i < jumpCount; ++i) {
-        int offset = readPtr - writePtr;
-        ASSERT(!(offset & 1));
-            
-        // Copy the instructions from the last jump to the current one.
-        size_t regionSize = jumpsToLink[i].from() - readPtr;
-        InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
-        InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
-        InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
-        ASSERT(!(regionSize % 2));
-        ASSERT(!(readPtr % 2));
-        ASSERT(!(writePtr % 2));
-        while (copySource != copyEnd)
-            *copyDst++ = *copySource++;
-        recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
-        readPtr += regionSize;
-        writePtr += regionSize;
-            
-        // Calculate absolute address of the jump target, in the case of backwards
-        // branches we need to be precise, forward branches we are pessimistic
-        const uint8_t* target;
-        if (jumpsToLink[i].to() >= jumpsToLink[i].from())
-            target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
-        else
-            target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
-            
-        JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
-        // Compact branch if we can...
-        if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
-            // Step back in the write stream
-            int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
-            if (delta) {
-                writePtr -= delta;
-                recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+    if (m_shouldPerformBranchCompaction) {
+        for (unsigned i = 0; i < jumpCount; ++i) {
+            int offset = readPtr - writePtr;
+            ASSERT(!(offset & 1));
+                
+            // Copy the instructions from the last jump to the current one.
+            size_t regionSize = jumpsToLink[i].from() - readPtr;
+            InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
+            InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
+            InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
+            ASSERT(!(regionSize % 2));
+            ASSERT(!(readPtr % 2));
+            ASSERT(!(writePtr % 2));
+            while (copySource != copyEnd)
+                *copyDst++ = *copySource++;
+            recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
+            readPtr += regionSize;
+            writePtr += regionSize;
+                
+            // Calculate absolute address of the jump target, in the case of backwards
+            // branches we need to be precise, forward branches we are pessimistic
+            const uint8_t* target;
+            if (jumpsToLink[i].to() >= jumpsToLink[i].from())
+                target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
+            else
+                target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
+                
+            JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
+            // Compact branch if we can...
+            if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
+                // Step back in the write stream
+                int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
+                if (delta) {
+                    writePtr -= delta;
+                    recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+                }
             }
+            jumpsToLink[i].setFrom(writePtr);
         }
-        jumpsToLink[i].setFrom(writePtr);
+    } else {
+        if (!ASSERT_DISABLED) {
+            for (unsigned i = 0; i < jumpCount; ++i)
+                ASSERT(!MacroAssembler::canCompact(jumpsToLink[i].type()));
+        }
     }
     // Copy everything after the last jump
-    memcpy(outData + writePtr, inData + readPtr, m_initialSize - readPtr);
-    recordLinkOffsets(m_assemblerStorage, readPtr, m_initialSize, readPtr - writePtr);
+    memcpy(outData + writePtr, inData + readPtr, initialSize - readPtr);
+    recordLinkOffsets(m_assemblerStorage, readPtr, initialSize, readPtr - writePtr);
         
     for (unsigned i = 0; i < jumpCount; ++i) {
         uint8_t* location = codeOutData + jumpsToLink[i].from();
@@ -162,12 +171,21 @@
     }
 
     jumpsToLink.clear();
-    shrink(writePtr + m_initialSize - readPtr);
 
-    performJITMemcpy(m_code, outBuffer.buffer(), m_size);
+    size_t compactSize = writePtr + initialSize - readPtr;
+    if (m_executableMemory) {
+        m_size = compactSize;
+        m_executableMemory->shrink(m_size);
+    } else {
+        size_t nopSizeInBytes = initialSize - compactSize;
+        bool isCopyingToExecutableMemory = false;
+        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);
+    }
 
+    performJITMemcpy(m_code, outData, m_size);
+
 #if DUMP_LINK_STATISTICS
-    dumpLinkStatistics(m_code, m_initialSize, m_size);
+    dumpLinkStatistics(m_code, initialSize, m_size);
 #endif
 #if DUMP_CODE
     dumpCode(m_code, m_size);
@@ -182,11 +200,11 @@
 #if defined(ASSEMBLER_HAS_CONSTANT_POOL) && ASSEMBLER_HAS_CONSTANT_POOL
     macroAssembler.m_assembler.buffer().flushConstantPool(false);
 #endif
-    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
-    allocate(buffer.codeSize(), ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
     if (!m_didAllocate)
         return;
     ASSERT(m_code);
+    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
 #if CPU(ARM_TRADITIONAL)
     macroAssembler.m_assembler.prepareExecutableCopy(m_code);
 #endif
@@ -198,19 +216,21 @@
     copyCompactAndLinkCode<uint16_t>(macroAssembler, ownerUID, effort);
 #elif CPU(ARM64)
     copyCompactAndLinkCode<uint32_t>(macroAssembler, ownerUID, effort);
-#endif
+#endif // !ENABLE(BRANCH_COMPACTION)
 
     m_linkTasks = WTFMove(macroAssembler.m_linkTasks);
 }
 
-void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
+void LinkBuffer::allocate(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
 {
+    size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (m_code) {
         if (initialSize > m_size)
             return;
         
+        size_t nopsToFillInBytes = m_size - initialSize;
+        macroAssembler.emitNops(nopsToFillInBytes);
         m_didAllocate = true;
-        m_size = initialSize;
         return;
     }
     
@@ -223,14 +243,6 @@
     m_didAllocate = true;
 }
 
-void LinkBuffer::shrink(size_t newSize)
-{
-    if (!m_executableMemory)
-        return;
-    m_size = newSize;
-    m_executableMemory->shrink(m_size);
-}
-
 void LinkBuffer::performFinalization()
 {
     for (auto& task : m_linkTasks)

Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -82,9 +82,6 @@
 public:
     LinkBuffer(VM& vm, MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort = JITCompilationMustSucceed)
         : m_size(0)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(0)
         , m_vm(&vm)
@@ -95,11 +92,8 @@
         linkCode(macroAssembler, ownerUID, effort);
     }
 
-    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed)
+    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed, bool shouldPerformBranchCompaction = true)
         : m_size(size)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(code)
         , m_vm(0)
@@ -107,6 +101,11 @@
         , m_completed(false)
 #endif
     {
+#if ENABLE(BRANCH_COMPACTION)
+        m_shouldPerformBranchCompaction = shouldPerformBranchCompaction;
+#else
+        UNUSED_PARAM(shouldPerformBranchCompaction);
+#endif
         linkCode(macroAssembler, 0, effort);
     }
 
@@ -250,11 +249,7 @@
         return m_code;
     }
 
-    // FIXME: this does not account for the AssemblerData size!
-    size_t size()
-    {
-        return m_size;
-    }
+    size_t size() const { return m_size; }
     
     bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
     void didAlreadyDisassemble() { m_alreadyDisassembled = true; }
@@ -278,7 +273,7 @@
 #endif
         return src;
     }
-    
+
     // Keep this private! - the underlying code should only be obtained externally via finalizeCode().
     void* code()
     {
@@ -285,8 +280,7 @@
         return m_code;
     }
     
-    void allocate(size_t initialSize, void* ownerUID, JITCompilationEffort);
-    void shrink(size_t newSize);
+    void allocate(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 
     JS_EXPORT_PRIVATE void linkCode(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 #if ENABLE(BRANCH_COMPACTION)
@@ -307,8 +301,8 @@
     RefPtr<ExecutableMemoryHandle> m_executableMemory;
     size_t m_size;
 #if ENABLE(BRANCH_COMPACTION)
-    size_t m_initialSize;
     AssemblerData m_assemblerStorage;
+    bool m_shouldPerformBranchCompaction { true };
 #endif
     bool m_didAllocate;
     void* m_code;

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -3087,6 +3087,14 @@
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranch64(RelationalCondition cond, RegisterID reg, TrustedImm64 imm)
     {
         m_makeJumpPatchable = true;

Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -1868,6 +1868,14 @@
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranchPtrWithPatch(RelationalCondition cond, Address left, DataLabelPtr& dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
     {
         m_makeJumpPatchable = true;

Modified: trunk/Source/JavaScriptCore/assembler/X86Assembler.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/X86Assembler.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -2946,8 +2946,9 @@
         m_formatter.oneByteOp(OP_NOP);
     }
 
-    static void fillNops(void* base, size_t size)
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     {
+        UNUSED_PARAM(isCopyingToExecutableMemory);
 #if CPU(X86_64)
         static const uint8_t nops[10][10] = {
             // nop

Modified: trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -441,6 +441,9 @@
         case CacheType::Unset:
             out.printf("unset");
             break;
+        case CacheType::ArrayLength:
+            out.printf("ArrayLength");
+            break;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;

Added: trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp (0 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp	                        (rev 0)
+++ trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -0,0 +1,299 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "InlineAccess.h"
+
+#if ENABLE(JIT)
+
+#include "CCallHelpers.h"
+#include "JSArray.h"
+#include "JSCellInlines.h"
+#include "LinkBuffer.h"
+#include "ScratchRegisterAllocator.h"
+#include "Structure.h"
+#include "StructureStubInfo.h"
+#include "VM.h"
+
+namespace JSC {
+
+void InlineAccess::dumpCacheSizesAndCrash(VM& vm)
+{
+    GPRReg base = GPRInfo::regT0;
+    GPRReg value = GPRInfo::regT1;
+#if USE(JSVALUE32_64)
+    JSValueRegs regs(base, value);
+#else
+    JSValueRegs regs(base);
+#endif
+
+    {
+        CCallHelpers jit(&vm);
+
+        GPRReg scratchGPR = value;
+        jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), value);
+        jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), value);
+        jit.patchableBranch32(
+            CCallHelpers::NotEqual, value, CCallHelpers::TrustedImm32(IsArray | ContiguousShape));
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value);
+        jit.load32(CCallHelpers::Address(value, ArrayStorage::lengthOffset()), value);
+        jit.boxInt32(scratchGPR, regs);
+
+        dataLog("array length size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadPtr(
+            CCallHelpers::Address(base, JSObject::butterflyOffset()),
+            value);
+        GPRReg storageGPR = value;
+        jit.loadValue(
+            CCallHelpers::Address(storageGPR, 0x000ab21ca), regs);
+
+        dataLog("out of line offset cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadValue(
+            MacroAssembler::Address(base, 0x000ab21ca), regs);
+
+        dataLog("inline offset cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.storeValue(
+            regs, MacroAssembler::Address(base, 0x000ab21ca));
+
+        dataLog("replace cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.loadPtr(MacroAssembler::Address(base, JSObject::butterflyOffset()), value);
+        jit.storeValue(
+            regs,
+            MacroAssembler::Address(base, 120342));
+
+        dataLog("replace out of line cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    CRASH();
+}
+
+
+template <typename Function>
+ALWAYS_INLINE static bool linkCodeInline(const char* name, CCallHelpers& jit, StructureStubInfo& stubInfo, const Function& function)
+{
+    if (jit.m_assembler.buffer().codeSize() <= stubInfo.patch.inlineSize) {
+        bool needsBranchCompaction = false;
+        LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), stubInfo.patch.inlineSize, JITCompilationMustSucceed, needsBranchCompaction);
+        ASSERT(linkBuffer.isValid());
+        function(linkBuffer);
+        FINALIZE_CODE(linkBuffer, ("InlineAccessType: '%s'", name));
+        return true;
+    }
+
+    // This is helpful when determining the size for inline ICs on various
+    // platforms. You want to choose a size that usually succeeds, but sometimes
+    // there may be variability in the length of the code we generate just because
+    // of randomness. It's helpful to flip this on when running tests or browsing
+    // the web just to see how often it fails. You don't want an IC size that always fails.
+    const bool failIfCantInline = false;
+    if (failIfCantInline) {
+        dataLog("Failure for: ", name, "\n");
+        dataLog("real size: ", jit.m_assembler.buffer().codeSize(), " inline size:", stubInfo.patch.inlineSize, "\n");
+        CRASH();
+    }
+
+    return false;
+}
+
+bool InlineAccess::generateSelfPropertyAccess(VM& vm, StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
+{
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast<uint32_t>(structure->id())));
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+        storage = value.payloadGPR();
+    }
+
+    jit.loadValue(
+        MacroAssembler::Address(storage, offsetRelativeToBase(offset)), value);
+
+    bool linkedCodeInline = linkCodeInline("property access", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+ALWAYS_INLINE static GPRReg getScratchRegister(StructureStubInfo& stubInfo)
+{
+    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseGPR));
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.valueGPR));
+#if USE(JSVALUE32_64)
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseTagGPR));
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.valueTagGPR));
+#endif
+    GPRReg scratch = allocator.allocateScratchGPR();
+    if (allocator.didReuseRegisters())
+        return InvalidGPRReg;
+    return scratch;
+}
+
+ALWAYS_INLINE static bool hasFreeRegister(StructureStubInfo& stubInfo)
+{
+    return getScratchRegister(stubInfo) != InvalidGPRReg;
+}
+
+bool InlineAccess::canGenerateSelfPropertyReplace(StructureStubInfo& stubInfo, PropertyOffset offset)
+{
+    if (isInlineOffset(offset))
+        return true;
+
+    return hasFreeRegister(stubInfo);
+}
+
+bool InlineAccess::generateSelfPropertyReplace(VM& vm, StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
+{
+    ASSERT(canGenerateSelfPropertyReplace(stubInfo, offset));
+
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast<uint32_t>(structure->id())));
+
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        storage = getScratchRegister(stubInfo);
+        ASSERT(storage != InvalidGPRReg);
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), storage);
+    }
+
+    jit.storeValue(
+        value, MacroAssembler::Address(storage, offsetRelativeToBase(offset)));
+
+    bool linkedCodeInline = linkCodeInline("property replace", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+bool InlineAccess::isCacheableArrayLength(StructureStubInfo& stubInfo, JSArray* array)
+{
+    ASSERT(array->indexingType() & IsArray);
+
+    if (!hasFreeRegister(stubInfo))
+        return false;
+
+    return array->indexingType() == ArrayWithInt32
+        || array->indexingType() == ArrayWithDouble
+        || array->indexingType() == ArrayWithContiguous;
+}
+
+bool InlineAccess::generateArrayLength(VM& vm, StructureStubInfo& stubInfo, JSArray* array)
+{
+    ASSERT(isCacheableArrayLength(stubInfo, array));
+
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+    GPRReg scratch = getScratchRegister(stubInfo);
+
+    jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), scratch);
+    jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), scratch);
+    auto branchToSlowPath = jit.patchableBranch32(
+        CCallHelpers::NotEqual, scratch, CCallHelpers::TrustedImm32(array->indexingType()));
+    jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+    jit.load32(CCallHelpers::Address(value.payloadGPR(), ArrayStorage::lengthOffset()), value.payloadGPR());
+    jit.boxInt32(value.payloadGPR(), value);
+
+    bool linkedCodeInline = linkCodeInline("array length", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+void InlineAccess::rewireStubAsJump(VM& vm, StructureStubInfo& stubInfo, CodeLocationLabel target)
+{
+    CCallHelpers jit(&vm);
+
+    auto jump = jit.jump();
+
+    // We don't need a nop sled here because nobody should be jumping into the middle of an IC.
+    bool needsBranchCompaction = false;
+    LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), jit.m_assembler.buffer().codeSize(), JITCompilationMustSucceed, needsBranchCompaction);
+    RELEASE_ASSERT(linkBuffer.isValid());
+    linkBuffer.link(jump, target);
+
+    FINALIZE_CODE(linkBuffer, ("InlineAccess: linking constant jump"));
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
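
For orientation, a minimal sketch of the intended call pattern for these helpers, assuming the check-then-generate flow that the Repatch.cpp hunks below use (tryCacheArrayLengthInline is a hypothetical wrapper name, not part of the patch):

    // Sketch only: cacheability is checked first, and generation can still
    // fail if the emitted code does not fit the reserved inline region.
    static bool tryCacheArrayLengthInline(VM& vm, StructureStubInfo& stubInfo, JSArray* array)
    {
        // Rejects unsupported indexing types and stubs with no free scratch register.
        if (!InlineAccess::isCacheableArrayLength(stubInfo, array))
            return false;
        // Returns false when the code is too large to splat inline; the caller
        // then falls back to a PolymorphicAccess stub.
        return InlineAccess::generateArrayLength(vm, stubInfo, array);
    }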

Added: trunk/Source/JavaScriptCore/bytecode/InlineAccess.h (0 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/InlineAccess.h	                        (rev 0)
+++ trunk/Source/JavaScriptCore/bytecode/InlineAccess.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef InlineAccess_h
+#define InlineAccess_h
+
+#if ENABLE(JIT)
+
+#include "CodeLocation.h"
+#include "PropertyOffset.h"
+
+namespace JSC {
+
+class JSArray;
+class Structure;
+class StructureStubInfo;
+class VM;
+
+class InlineAccess {
+public:
+
+    static constexpr size_t sizeForPropertyAccess()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static constexpr size_t sizeForPropertyReplace()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static constexpr size_t sizeForLengthAccess()
+    {
+#if CPU(X86_64)
+        return 26;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 32;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 30;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static bool generateSelfPropertyAccess(VM&, StructureStubInfo&, Structure*, PropertyOffset);
+    static bool canGenerateSelfPropertyReplace(StructureStubInfo&, PropertyOffset);
+    static bool generateSelfPropertyReplace(VM&, StructureStubInfo&, Structure*, PropertyOffset);
+    static bool isCacheableArrayLength(StructureStubInfo&, JSArray*);
+    static bool generateArrayLength(VM&, StructureStubInfo&, JSArray*);
+    static void rewireStubAsJump(VM&, StructureStubInfo&, CodeLocationLabel);
+
+    // This is helpful when determining the size of an IC on
+    // various platforms. When adding a new type of IC, implement
+    // its placeholder code here and log the resulting size. That way
+    // we can intelligently choose the size to reserve on each platform.
+    NO_RETURN_DUE_TO_CRASH void dumpCacheSizesAndCrash(VM&);
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // InlineAccess_h
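
The per-platform byte counts above are hand-measured. One plausible way to refresh them when adding a new IC shape (a hypothetical workflow; the patch only documents the intent, and the exact call site is up to the developer) is to run the dumper once per target and transcribe the logged sizes into the #if CPU(...) ladders:

    // Hypothetical tuning aid, not part of the patch: run once per target
    // platform, read the logged sizes, then update the constants above.
    InlineAccess().dumpCacheSizesAndCrash(vm);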

Modified: trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -1551,11 +1551,7 @@
     state.ident = &ident;
     
     state.baseGPR = static_cast<GPRReg>(stubInfo.patch.baseGPR);
-    state.valueRegs = JSValueRegs(
-#if USE(JSVALUE32_64)
-        static_cast<GPRReg>(stubInfo.patch.valueTagGPR),
-#endif
-        static_cast<GPRReg>(stubInfo.patch.valueGPR));
+    state.valueRegs = stubInfo.valueRegs();
 
     ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
     state.allocator = &allocator;
@@ -1753,14 +1749,11 @@
         return AccessGenerationResult::GaveUp;
     }
 
-    CodeLocationLabel successLabel =
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToDone);
+    CodeLocationLabel successLabel = stubInfo.doneLocation();
         
     linkBuffer.link(state.success, successLabel);
 
-    linkBuffer.link(
-        failure,
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    linkBuffer.link(failure, stubInfo.slowPathStartLocation());
     
     if (verbose)
         dataLog(*codeBlock, " ", stubInfo.codeOrigin, ": Generating polymorphic access stub for ", listDump(cases), "\n");

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -63,6 +63,11 @@
     u.byIdSelf.offset = offset;
 }
 
+void StructureStubInfo::initArrayLength()
+{
+    cacheType = CacheType::ArrayLength;
+}
+
 void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
 {
     cacheType = CacheType::PutByIdReplace;
@@ -87,6 +92,7 @@
     case CacheType::Unset:
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
     }
 
@@ -102,6 +108,7 @@
     case CacheType::Unset:
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
     }
 
@@ -257,6 +264,7 @@
 {
     switch (cacheType) {
     case CacheType::Unset:
+    case CacheType::ArrayLength:
         return true;
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
@@ -275,6 +283,7 @@
         return false;
     return u.stub->containsPC(pc);
 }
-#endif
 
+#endif // ENABLE(JIT)
+
 } // namespace JSC

Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -57,7 +57,8 @@
     Unset,
     GetByIdSelf,
     PutByIdReplace,
-    Stub
+    Stub,
+    ArrayLength
 };
 
 class StructureStubInfo {
@@ -68,6 +69,7 @@
     ~StructureStubInfo();
 
     void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
+    void initArrayLength();
     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
     void initStub(CodeBlock*, std::unique_ptr<PolymorphicAccess>);
 
@@ -143,13 +145,11 @@
         return false;
     }
 
-    CodeLocationCall callReturnLocation;
+    bool containsPC(void* pc) const;
 
     CodeOrigin codeOrigin;
     CallSiteIndex callSiteIndex;
 
-    bool containsPC(void* pc) const;
-
     union {
         struct {
             WriteBarrierBase<Structure> baseObjectStructure;
@@ -165,25 +165,39 @@
     StructureSet bufferedStructures;
     
     struct {
+        CodeLocationLabel start; // This is either the start of the inline IC for *byId caches, or the location of the patchable jump for 'in' caches.
+        RegisterSet usedRegisters;
+        uint32_t inlineSize;
+        int32_t deltaFromStartToSlowPathCallLocation;
+        int32_t deltaFromStartToSlowPathStart;
+
         int8_t baseGPR;
+        int8_t valueGPR;
 #if USE(JSVALUE32_64)
         int8_t valueTagGPR;
         int8_t baseTagGPR;
 #endif
-        int8_t valueGPR;
-        RegisterSet usedRegisters;
-        int32_t deltaCallToDone;
-        int32_t deltaCallToJump;
-        int32_t deltaCallToSlowCase;
-        int32_t deltaCheckImmToCall;
-#if USE(JSVALUE64)
-        int32_t deltaCallToLoadOrStore;
-#else
-        int32_t deltaCallToTagLoadOrStore;
-        int32_t deltaCallToPayloadLoadOrStore;
-#endif
     } patch;
 
+    CodeLocationCall slowPathCallLocation() { return patch.start.callAtOffset(patch.deltaFromStartToSlowPathCallLocation); }
+    CodeLocationLabel doneLocation() { return patch.start.labelAtOffset(patch.inlineSize); }
+    CodeLocationLabel slowPathStartLocation() { return patch.start.labelAtOffset(patch.deltaFromStartToSlowPathStart); }
+    CodeLocationJump patchableJumpForIn()
+    { 
+        ASSERT(accessType == AccessType::In);
+        return patch.start.jumpAtOffset(0);
+    }
+
+    JSValueRegs valueRegs() const
+    {
+        return JSValueRegs(
+#if USE(JSVALUE32_64)
+            static_cast<GPRReg>(patch.valueTagGPR),
+#endif
+            static_cast<GPRReg>(patch.valueGPR));
+    }
+
+
     AccessType accessType;
     CacheType cacheType;
     uint8_t countdown; // We repatch only when this is zero. If not zero, we decrement.
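
Read together, the new fields and accessors describe the inline region like so (a summary of the arithmetic above, not code from the patch):

    // patch.start                                           first byte of the inline IC
    // patch.start + patch.inlineSize                        doneLocation(): fast-path fall-through
    // patch.start + deltaFromStartToSlowPathStart           slowPathStartLocation(): out-of-line slow path
    // patch.start + deltaFromStartToSlowPathCallLocation    slowPathCallLocation(): the slow-path call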

Modified: trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -258,11 +258,20 @@
 
     for (unsigned i = 0; i < m_ins.size(); ++i) {
         StructureStubInfo& info = *m_ins[i].m_stubInfo;
-        CodeLocationCall callReturnLocation = linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call());
-        info.patch.deltaCallToDone = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_done));
-        info.patch.deltaCallToJump = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_jump));
-        info.callReturnLocation = callReturnLocation;
-        info.patch.deltaCallToSlowCase = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
+
+        CodeLocationLabel start = linkBuffer.locationOf(m_ins[i].m_jump);
+        info.patch.start = start;
+
+        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_done));
+        RELEASE_ASSERT(inlineSize >= 0);
+        info.patch.inlineSize = inlineSize;
+
+        info.patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call()));
+
+        info.patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
     }
     
     for (unsigned i = 0; i < m_jsCalls.size(); ++i) {

Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -186,8 +186,7 @@
                     baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
                 RELEASE_ASSERT(stubInfo);
 
-                jumpTarget = stubInfo->callReturnLocation.labelAtOffset(
-                    stubInfo->patch.deltaCallToDone).executableAddress();
+                jumpTarget = stubInfo->doneLocation().executableAddress();
                 break;
             }
 

Modified: trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -197,9 +197,8 @@
     
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
-        JSValueRegs(baseTagGPROrNone, basePayloadGPR),
-        JSValueRegs(resultTagGPR, resultPayloadGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), type);
     
     gen.generateFastPath(m_jit);
     

Modified: trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -168,8 +168,8 @@
         usedRegisters.set(resultGPR, false);
     }
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, JSValueRegs(baseGPR),
-        JSValueRegs(resultGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseGPR), JSValueRegs(resultGPR), type);
     gen.generateFastPath(m_jit);
     
     JITCompiler::JumpList slowCases;

Modified: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -6183,21 +6183,19 @@
 
                                 jit.addLinkTask(
                                     [=] (LinkBuffer& linkBuffer) {
-                                        CodeLocationCall callReturnLocation =
-                                            linkBuffer.locationOf(slowPathCall);
-                                        stubInfo->patch.deltaCallToDone =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(done));
-                                        stubInfo->patch.deltaCallToJump =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(jump));
-                                        stubInfo->callReturnLocation = callReturnLocation;
-                                        stubInfo->patch.deltaCallToSlowCase =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(slowPathBegin));
+                                        CodeLocationLabel start = linkBuffer.locationOf(jump);
+                                        stubInfo->patch.start = start;
+                                        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(done));
+                                        RELEASE_ASSERT(inlineSize >= 0);
+                                        stubInfo->patch.inlineSize = inlineSize;
+
+                                        stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathCall));
+
+                                        stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathBegin));
+
                                     });
                             });
                     });
@@ -7616,7 +7614,7 @@
 
                 auto generator = Box<JITGetByIdGenerator>::create(
                     jit.codeBlock(), node->origin.semantic, callSiteIndex,
-                    params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
+                    params.unavailableRegisters(), uid, JSValueRegs(params[1].gpr()),
                     JSValueRegs(params[0].gpr()), type);
 
                 generator->generateFastPath(jit);

Modified: trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -29,8 +29,9 @@
 #if ENABLE(JIT)
 
 #include "CodeBlock.h"
+#include "InlineAccess.h"
+#include "JSCInlines.h"
 #include "LinkBuffer.h"
-#include "JSCInlines.h"
 #include "StructureStubInfo.h"
 
 namespace JSC {
@@ -69,25 +70,19 @@
 
 void JITByIdGenerator::finalize(LinkBuffer& fastPath, LinkBuffer& slowPath)
 {
-    CodeLocationCall callReturnLocation = slowPath.locationOf(m_call);
-    m_stubInfo->callReturnLocation = callReturnLocation;
-    m_stubInfo->patch.deltaCheckImmToCall = MacroAssembler::differenceBetweenCodePtr(
-        fastPath.locationOf(m_structureImm), callReturnLocation);
-    m_stubInfo->patch.deltaCallToJump = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_structureCheck));
-#if USE(JSVALUE64)
-    m_stubInfo->patch.deltaCallToLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#else
-    m_stubInfo->patch.deltaCallToTagLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_tagLoadOrStore));
-    m_stubInfo->patch.deltaCallToPayloadLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#endif
-    m_stubInfo->patch.deltaCallToSlowCase = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, slowPath.locationOf(m_slowPathBegin));
-    m_stubInfo->patch.deltaCallToDone = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_done));
+    ASSERT(m_start.isSet());
+    CodeLocationLabel start = fastPath.locationOf(m_start);
+    m_stubInfo->patch.start = start;
+
+    int32_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+        start, fastPath.locationOf(m_done));
+    ASSERT(inlineSize > 0);
+    m_stubInfo->patch.inlineSize = inlineSize;
+
+    m_stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathCall));
+    m_stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathBegin));
 }
 
 void JITByIdGenerator::finalize(LinkBuffer& linkBuffer)
@@ -95,19 +90,23 @@
     finalize(linkBuffer, linkBuffer);
 }
 
-void JITByIdGenerator::generateFastPathChecks(MacroAssembler& jit)
+void JITByIdGenerator::generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
 {
-    m_structureCheck = jit.patchableBranch32WithPatch(
-        MacroAssembler::NotEqual,
-        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
-        m_structureImm, MacroAssembler::TrustedImm32(0));
+    m_start = jit.label();
+    size_t startSize = jit.m_assembler.buffer().codeSize();
+    m_slowPathJump = jit.jump();
+    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
+    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
+    jit.emitNops(nopsToEmitInBytes);
+    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    m_done = jit.label();
 }
 
 JITGetByIdGenerator::JITGetByIdGenerator(
     CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet& usedRegisters,
-    JSValueRegs base, JSValueRegs value, AccessType accessType)
-    : JITByIdGenerator(
-        codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    UniquedStringImpl* propertyName, JSValueRegs base, JSValueRegs value, AccessType accessType)
+    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    , m_isLengthAccess(propertyName == codeBlock->vm()->propertyNames->length.impl())
 {
     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
 }
@@ -114,19 +113,7 @@
 
 void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.load64WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#else
-    m_tagLoadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.tagGPR()).label();
-    m_loadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#endif
-    
-    m_done = jit.label();
+    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
 }
 
 JITPutByIdGenerator::JITPutByIdGenerator(
@@ -143,19 +130,7 @@
 
 void JITPutByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.store64WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#else
-    m_tagLoadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.tagGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-    m_loadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#endif
-    
-    m_done = jit.label();
+    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
 }
 
 V_JITOperation_ESsiJJI JITPutByIdGenerator::slowPathFunction()
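
Until an IC is generated, generateFastCommon() above leaves the reserved inline region as a jump to the slow path padded with nops, roughly as sketched here (exact encodings are per-architecture):

    // m_start:  jmp  slow_path_begin     ; m_slowPathJump, linked via slowPathJump(),
    //                                    ; taken until Repatch overwrites the region
    //           nop  ...                 ; pad to exactly inlineICSize bytes
    // m_done:                            ; fall-through target once InlineAccess
    //                                    ; copies a real fast path over the region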

Modified: trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h (202213 => 202214)


--- trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h	2016-06-19 19:42:18 UTC (rev 202214)
@@ -68,30 +68,30 @@
     void reportSlowPathCall(MacroAssembler::Label slowPathBegin, MacroAssembler::Call call)
     {
         m_slowPathBegin = slowPathBegin;
-        m_call = call;
+        m_slowPathCall = call;
     }
     
     MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
-    MacroAssembler::Jump slowPathJump() const { return m_structureCheck.m_jump; }
+    MacroAssembler::Jump slowPathJump() const
+    {
+        ASSERT(m_slowPathJump.isSet());
+        return m_slowPathJump;
+    }
 
     void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
     void finalize(LinkBuffer&);
     
 protected:
-    void generateFastPathChecks(MacroAssembler&);
+    void generateFastCommon(MacroAssembler&, size_t size);
     
     JSValueRegs m_base;
     JSValueRegs m_value;
     
-    MacroAssembler::DataLabel32 m_structureImm;
-    MacroAssembler::PatchableJump m_structureCheck;
-    AssemblerLabel m_loadOrStore;
-#if USE(JSVALUE32_64)
-    AssemblerLabel m_tagLoadOrStore;
-#endif
+    MacroAssembler::Label m_start;
     MacroAssembler::Label m_done;
     MacroAssembler::Label m_slowPathBegin;
-    MacroAssembler::Call m_call;
+    MacroAssembler::Call m_slowPathCall;
+    MacroAssembler::Jump m_slowPathJump;
 };
 
 class JITGetByIdGenerator : public JITByIdGenerator {
@@ -99,10 +99,13 @@
     JITGetByIdGenerator() { }
 
     JITGetByIdGenerator(
-        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, JSValueRegs base,
-        JSValueRegs value, AccessType);
+        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, UniquedStringImpl* propertyName,
+        JSValueRegs base, JSValueRegs value, AccessType);
     
     void generateFastPath(MacroAssembler&);
+
+private:
+    bool m_isLengthAccess;
 };
 
 class JITPutByIdGenerator : public JITByIdGenerator {

Modified: trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -222,7 +222,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
 
     fastDoneCase = jump();
@@ -571,6 +571,7 @@
 {
     int resultVReg = currentInstruction[1].u.operand;
     int baseVReg = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
 
     emitGetVirtualRegister(baseVReg, regT0);
 
@@ -578,7 +579,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -619,7 +620,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);

Modified: trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -292,7 +292,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
 
     fastDoneCase = jump();
@@ -587,6 +587,7 @@
 {
     int dst = currentInstruction[1].u.operand;
     int base = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
 
     emitLoad(base, regT1, regT0);
     emitJumpSlowCaseIfNotJSCell(base, regT1);
@@ -593,7 +594,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -634,7 +635,7 @@
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);

Modified: trunk/Source/JavaScriptCore/jit/Repatch.cpp (202213 => 202214)


--- trunk/Source/JavaScriptCore/jit/Repatch.cpp	2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/Repatch.cpp	2016-06-19 19:42:18 UTC (rev 202214)
@@ -38,6 +38,7 @@
 #include "GCAwareJITStubRoutine.h"
 #include "GetterSetter.h"
 #include "ICStats.h"
+#include "InlineAccess.h"
 #include "JIT.h"
 #include "JITInlines.h"
 #include "LinkBuffer.h"
@@ -90,95 +91,6 @@
     MacroAssembler::repatchCall(call, newCalleeFunction);
 }
 
-static void repatchByIdSelfAccess(
-    CodeBlock* codeBlock, StructureStubInfo& stubInfo, Structure* structure,
-    PropertyOffset offset, const FunctionPtr& slowPathFunction,
-    bool compact)
-{
-    // Only optimize once!
-    repatchCall(codeBlock, stubInfo.callReturnLocation, slowPathFunction);
-
-    // Patch the structure check & the offset of the load.
-    MacroAssembler::repatchInt32(
-        stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall),
-        bitwise_cast<int32_t>(structure->id()));
-#if USE(JSVALUE64)
-    if (compact)
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-    else
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-#elif USE(JSVALUE32_64)
-    if (compact) {
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    } else {
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    }
-#endif
-}
-
-static void resetGetByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void resetPutByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void replaceWithJump(StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
-{
-    RELEASE_ASSERT(target);
-    
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::replaceWithJump(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
-                stubInfo.callReturnLocation.dataLabel32AtOffset(
-                    -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
-            CodeLocationLabel(target));
-        return;
-    }
-
-    resetGetByIDCheckAndLoad(stubInfo);
-    
-    MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(
-            stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(target));
-}
-
 enum InlineCacheAction {
     GiveUpOnCache,
     RetryCacheLater,
@@ -241,9 +153,21 @@
     std::unique_ptr<AccessCase> newCase;
 
     if (propertyName == vm.propertyNames->length) {
-        if (isJSArray(baseValue))
+        if (isJSArray(baseValue)) {
+            if (stubInfo.cacheType == CacheType::Unset
+                && slot.slotBase() == baseValue
+                && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseValue))) {
+
+                bool generatedCodeInline = InlineAccess::generateArrayLength(*codeBlock->vm(), stubInfo, jsCast<JSArray*>(baseValue));
+                if (generatedCodeInline) {
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                    stubInfo.initArrayLength();
+                    return RetryCacheLater;
+                }
+            }
+
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::ArrayLength);
-        else if (isJSString(baseValue))
+        } else if (isJSString(baseValue))
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::StringLength);
         else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(baseValue)) {
             // If there were overrides, then we can handle this as a normal property load! Guarding
@@ -276,22 +200,23 @@
         InlineCacheAction action = "" baseCell);
         if (action != AttemptToCache)
             return action;
-        
+
         // Optimize self access.
         if (stubInfo.cacheType == CacheType::Unset
             && slot.isCacheableValue()
             && slot.slotBase() == baseValue
             && !slot.watchpointSet()
-            && isInlineOffset(slot.cachedOffset())
-            && MacroAssembler::isCompactPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
-            && action == AttemptToCache
             && !structure->needImpurePropertyWatchpoint()
             && !loadTargetFromProxy) {
-            LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
-            structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
-            repatchByIdSelfAccess(codeBlock, stubInfo, structure, slot.cachedOffset(), appropriateOptimizingGetByIdFunction(kind), true);
-            stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
-            return RetryCacheLater;
+
+            bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(*codeBlock->vm(), stubInfo, structure, slot.cachedOffset());
+            if (generatedCodeInline) {
+                LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
+                structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
+                repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
+                return RetryCacheLater;
+            }
         }
 
         PropertyOffset offset = slot.isUnset() ? invalidOffset : slot.cachedOffset();
@@ -370,7 +295,7 @@
         LOG_IC((ICEvent::GetByIdReplaceWithJump, baseValue.classInfoOrNull(), propertyName));
         
         RELEASE_ASSERT(result.code());
-        replaceWithJump(stubInfo, result.code());
+        InlineAccess::rewireStubAsJump(exec->vm(), stubInfo, CodeLocationLabel(result.code()));
     }
     
     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
@@ -382,7 +307,7 @@
     GCSafeConcurrentJITLocker locker(exec->codeBlock()->m_lock, exec->vm().heap);
     
     if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericGetByIdFunction(kind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind));
 }
 
 static V_JITOperation_ESsiJJI appropriateGenericPutByIdFunction(const PutPropertySlot &slot, PutKind putKind)
@@ -433,18 +358,17 @@
             structure->didCachePropertyReplacement(vm, slot.cachedOffset());
         
             if (stubInfo.cacheType == CacheType::Unset
-                && isInlineOffset(slot.cachedOffset())
-                && MacroAssembler::isPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
+                && InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
                 && !structure->needImpurePropertyWatchpoint()
                 && !structure->inferredTypeFor(ident.impl())) {
                 
-                LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
-                
-                repatchByIdSelfAccess(
-                    codeBlock, stubInfo, structure, slot.cachedOffset(),
-                    appropriateOptimizingPutByIdFunction(slot, putKind), false);
-                stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
-                return RetryCacheLater;
+                bool generatedCodeInline = InlineAccess::generateSelfPropertyReplace(vm, stubInfo, structure, slot.cachedOffset());
+                if (generatedCodeInline) {
+                    LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));
+                    stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
+                    return RetryCacheLater;
+                }
             }
 
             newCase = AccessCase::replace(vm, codeBlock, structure, slot.cachedOffset());
@@ -524,11 +448,8 @@
         LOG_IC((ICEvent::PutByIdReplaceWithJump, structure->classInfo(), ident));
         
         RELEASE_ASSERT(result.code());
-        resetPutByIDCheckAndLoad(stubInfo);
-        MacroAssembler::repatchJump(
-            stubInfo.callReturnLocation.jumpAtOffset(
-                stubInfo.patch.deltaCallToJump),
-            CodeLocationLabel(result.code()));
+
+        InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(result.code()));
     }
     
     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
@@ -540,7 +461,7 @@
     GCSafeConcurrentJITLocker locker(exec->codeBlock()->m_lock, exec->vm().heap);
     
     if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericPutByIdFunction(slot, putKind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind));
 }
 
 static InlineCacheAction tryRepatchIn(
@@ -586,8 +507,9 @@
         LOG_IC((ICEvent::InReplaceWithJump, structure->classInfo(), ident));
         
         RELEASE_ASSERT(result.code());
+
         MacroAssembler::repatchJump(
-            stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
+            stubInfo.patchableJumpForIn(),
             CodeLocationLabel(result.code()));
     }
     
@@ -600,7 +522,7 @@
 {
     SuperSamplerScope superSamplerScope(false);
     if (tryRepatchIn(exec, base, ident, wasFound, slot, stubInfo) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, operationIn);
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
 }
 
 static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
@@ -972,14 +894,13 @@
 
 void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind)
 {
-    repatchCall(codeBlock, stubInfo.callReturnLocation, appropriateOptimizingGetByIdFunction(kind));
-    resetGetByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }
 
 void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
 {
-    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.callReturnLocation).executableAddress());
+    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());
     V_JITOperation_ESsiJJI optimizedFunction;
     if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize)
         optimizedFunction = operationPutByIdStrictOptimize;
@@ -991,14 +912,14 @@
         ASSERT(unoptimizedFunction == operationPutByIdDirectNonStrict || unoptimizedFunction == operationPutByIdDirectNonStrictOptimize);
         optimizedFunction = operationPutByIdDirectNonStrictOptimize;
     }
-    repatchCall(codeBlock, stubInfo.callReturnLocation, optimizedFunction);
-    resetPutByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), optimizedFunction);
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }
 
 void resetIn(CodeBlock*, StructureStubInfo& stubInfo)
 {
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    MacroAssembler::repatchJump(stubInfo.patchableJumpForIn(), stubInfo.slowPathStartLocation());
 }
 
 } // namespace JSC
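
Putting the Repatch.cpp changes together, a given inline region now cycles through roughly these states (an illustrative summary, not code from the patch):

    // 1. jmp slow path + nop sled   (initial layout from generateFastCommon())
    // 2. inline fast path           (InlineAccess::generateSelfPropertyAccess(),
    //                                generateSelfPropertyReplace(), or generateArrayLength())
    // 3. jmp polymorphic stub       (InlineAccess::rewireStubAsJump() once the IC
    //                                outgrows the inline region)
    // 4. jmp slow path              (resetGetByID()/resetPutByID() rewire back to
    //                                slowPathStartLocation() for a fresh start)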