Title: [192135] trunk/Source/JavaScriptCore
Revision: 192135
Author: fpi...@apple.com
Date: 2015-11-07 18:10:57 -0800 (Sat, 07 Nov 2015)

Log Message

B3->Air lowering should do pattern matching the old fashioned way
https://bugs.webkit.org/show_bug.cgi?id=150994

Reviewed by Geoffrey Garen.

When I first wrote the B3->Air lowering prototype, I was convinced that the patterns would get
so gnarly that we'd want a pattern language to write them in. So I made one, and that's what
the lowering has used. But as we've worked with the IR, we've found that it's very easy to
pattern match in C++ using the B3 API, and we've also found that most of the patterns we wrote
using the pattern language were trivial. So this change removes the pattern match code
generator and the patterns files, and redoes the lowering using good old fashioned switch
statements. This actually reduces the total code of the lowering.

I also took the opportunity to refactor UnOp and BinOp lowering. We had a lot of repetitive
code for 32-vs-64-bit opcode selection, so I factored that out into a helper. This also saves a
lot of code.

* CMakeLists.txt:
* DerivedSources.make:
* b3/B3AddressMatcher.patterns: Removed.
* b3/B3LowerToAir.cpp:
(JSC::B3::Air::LowerToAir::LowerToAir):
(JSC::B3::Air::LowerToAir::run):
(JSC::B3::Air::LowerToAir::highBitsAreZero):
(JSC::B3::Air::LowerToAir::tmp):
(JSC::B3::Air::LowerToAir::canBeInternal):
(JSC::B3::Air::LowerToAir::commitInternal):
(JSC::B3::Air::LowerToAir::crossesInterference):
(JSC::B3::Air::LowerToAir::effectiveAddr):
(JSC::B3::Air::LowerToAir::addr):
(JSC::B3::Air::LowerToAir::loadPromise):
(JSC::B3::Air::LowerToAir::imm):
(JSC::B3::Air::LowerToAir::immForMove):
(JSC::B3::Air::LowerToAir::immOrTmpForMove):
(JSC::B3::Air::LowerToAir::tryOpcodeForType):
(JSC::B3::Air::LowerToAir::opcodeForType):
(JSC::B3::Air::LowerToAir::appendUnOp):
(JSC::B3::Air::LowerToAir::appendBinOp):
(JSC::B3::Air::LowerToAir::appendShift):
(JSC::B3::Air::LowerToAir::tryAppendStoreUnOp):
(JSC::B3::Air::LowerToAir::tryAppendStoreBinOp):
(JSC::B3::Air::LowerToAir::append):
(JSC::B3::Air::LowerToAir::ensureSpecial):
(JSC::B3::Air::LowerToAir::fillStackmap):
(JSC::B3::Air::LowerToAir::createGenericCompare):
(JSC::B3::Air::LowerToAir::createBranch):
(JSC::B3::Air::LowerToAir::createCompare):
(JSC::B3::Air::LowerToAir::lower):
(JSC::B3::Air::LowerToAir::immOrTmp): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::AddressSelector): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptRoot): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptRootLate): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptInternals): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptInternalsLate): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptOperands): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::acceptOperandsLate): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryAddShift1): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryAddShift2): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryAdd): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryFramePointer): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryStackSlot): Deleted.
(JSC::B3::Air::LowerToAir::AddressSelector::tryDirect): Deleted.
(JSC::B3::Air::LowerToAir::acceptRoot): Deleted.
(JSC::B3::Air::LowerToAir::acceptRootLate): Deleted.
(JSC::B3::Air::LowerToAir::acceptInternals): Deleted.
(JSC::B3::Air::LowerToAir::acceptInternalsLate): Deleted.
(JSC::B3::Air::LowerToAir::acceptOperands): Deleted.
(JSC::B3::Air::LowerToAir::acceptOperandsLate): Deleted.
(JSC::B3::Air::LowerToAir::tryLoad): Deleted.
(JSC::B3::Air::LowerToAir::tryLoad8S): Deleted.
(JSC::B3::Air::LowerToAir::tryLoad8Z): Deleted.
(JSC::B3::Air::LowerToAir::tryLoad16S): Deleted.
(JSC::B3::Air::LowerToAir::tryLoad16Z): Deleted.
(JSC::B3::Air::LowerToAir::tryAdd): Deleted.
(JSC::B3::Air::LowerToAir::trySub): Deleted.
(JSC::B3::Air::LowerToAir::tryAnd): Deleted.
(JSC::B3::Air::LowerToAir::tryOr): Deleted.
(JSC::B3::Air::LowerToAir::tryXor): Deleted.
(JSC::B3::Air::LowerToAir::tryShl): Deleted.
(JSC::B3::Air::LowerToAir::trySShr): Deleted.
(JSC::B3::Air::LowerToAir::tryZShr): Deleted.
(JSC::B3::Air::LowerToAir::tryStoreAddLoad): Deleted.
(JSC::B3::Air::LowerToAir::tryStoreSubLoad): Deleted.
(JSC::B3::Air::LowerToAir::tryStoreAndLoad): Deleted.
(JSC::B3::Air::LowerToAir::tryStore): Deleted.
(JSC::B3::Air::LowerToAir::tryTrunc): Deleted.
(JSC::B3::Air::LowerToAir::tryZExt32): Deleted.
(JSC::B3::Air::LowerToAir::tryArgumentReg): Deleted.
(JSC::B3::Air::LowerToAir::tryConst32): Deleted.
(JSC::B3::Air::LowerToAir::tryConst64): Deleted.
(JSC::B3::Air::LowerToAir::tryFramePointer): Deleted.
(JSC::B3::Air::LowerToAir::tryStackSlot): Deleted.
(JSC::B3::Air::LowerToAir::tryEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryNotEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryLessThan): Deleted.
(JSC::B3::Air::LowerToAir::tryGreaterThan): Deleted.
(JSC::B3::Air::LowerToAir::tryLessEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryGreaterEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryAbove): Deleted.
(JSC::B3::Air::LowerToAir::tryBelow): Deleted.
(JSC::B3::Air::LowerToAir::tryAboveEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryBelowEqual): Deleted.
(JSC::B3::Air::LowerToAir::tryPatchpoint): Deleted.
(JSC::B3::Air::LowerToAir::tryCheck): Deleted.
(JSC::B3::Air::LowerToAir::tryUpsilon): Deleted.
(JSC::B3::Air::LowerToAir::tryPhi): Deleted.
(JSC::B3::Air::LowerToAir::tryBranch): Deleted.
(JSC::B3::Air::LowerToAir::tryJump): Deleted.
(JSC::B3::Air::LowerToAir::tryIdentity): Deleted.
(JSC::B3::Air::LowerToAir::tryReturn): Deleted.
* b3/B3LoweringMatcher.patterns: Removed.
* b3/generate_pattern_matcher.rb: Removed.

Diff

Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (192134 => 192135)


--- trunk/Source/JavaScriptCore/CMakeLists.txt	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt	2015-11-08 02:10:57 UTC (rev 192135)
@@ -1223,22 +1223,6 @@
 )
 
 add_custom_command(
-    OUTPUT ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/B3AddressMatcher.h
-    MAIN_DEPENDENCY ${JAVASCRIPTCORE_DIR}/b3/B3AddressMatcher.patterns
-    DEPENDS ${JAVASCRIPTCORE_DIR}/b3/generate_pattern_matcher.rb
-    COMMAND ${RUBY_EXECUTABLE} ${JAVASCRIPTCORE_DIR}/b3/generate_pattern_matcher.rb ${JAVASCRIPTCORE_DIR}/b3/B3AddressMatcher.patterns VERBATIM
-    WORKING_DIRECTORY ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}
-)
-
-add_custom_command(
-    OUTPUT ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/B3LoweringMatcher.h
-    MAIN_DEPENDENCY ${JAVASCRIPTCORE_DIR}/b3/B3LoweringMatcher.patterns
-    DEPENDS ${JAVASCRIPTCORE_DIR}/b3/generate_pattern_matcher.rb
-    COMMAND ${RUBY_EXECUTABLE} ${JAVASCRIPTCORE_DIR}/b3/generate_pattern_matcher.rb ${JAVASCRIPTCORE_DIR}/b3/B3LoweringMatcher.patterns VERBATIM
-    WORKING_DIRECTORY ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}
-)
-
-add_custom_command(
     OUTPUT ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcode.h ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcodeGenerated.h
     MAIN_DEPENDENCY ${JAVASCRIPTCORE_DIR}/b3/air/AirOpcode.opcodes
     DEPENDS ${JAVASCRIPTCORE_DIR}/b3/air/opcode_generator.rb
@@ -1249,8 +1233,6 @@
 list(APPEND JavaScriptCore_SOURCES
     ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcode.h
     ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcodeGenerated.h
-    ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/B3AddressMatcher.h
-    ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/B3LoweringMatcher.h
 )
 
 add_custom_command(

Modified: trunk/Source/JavaScriptCore/ChangeLog (192134 => 192135)


--- trunk/Source/JavaScriptCore/ChangeLog	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/ChangeLog	2015-11-08 02:10:57 UTC (rev 192135)
@@ -1,3 +1,118 @@
+2015-11-06  Filip Pizlo  <fpi...@apple.com>
+
+        B3->Air lowering should do pattern matching the old fashioned way
+        https://bugs.webkit.org/show_bug.cgi?id=150994
+
+        Reviewed by Geoffrey Garen.
+
+        When I first wrote the B3->Air lowering prototype, I was convinced that the patterns would get
+        so gnarly that we'd want a pattern language to write them in. So I made one, and that's what
+        the lowering has used. But as we've worked with the IR, we've found that it's very easy to
+        pattern match in C++ using the B3 API, and we've also found that most of the patterns we wrote
+        using the pattern language were trivial. So this change removes the pattern match code
+        generator and the patterns files, and redoes the lowering using good old fashioned switch
+        statements. This actually reduces the total code of the lowering.
+
+        I also took the opportunity to refactor UnOp and BinOp lowering. We had a lot of repetitive
+        code for 32-vs-64-bit opcode selection, so I factored that out into a helper. This also saves a
+        lot of code.
+
+        * CMakeLists.txt:
+        * DerivedSources.make:
+        * b3/B3AddressMatcher.patterns: Removed.
+        * b3/B3LowerToAir.cpp:
+        (JSC::B3::Air::LowerToAir::LowerToAir):
+        (JSC::B3::Air::LowerToAir::run):
+        (JSC::B3::Air::LowerToAir::highBitsAreZero):
+        (JSC::B3::Air::LowerToAir::tmp):
+        (JSC::B3::Air::LowerToAir::canBeInternal):
+        (JSC::B3::Air::LowerToAir::commitInternal):
+        (JSC::B3::Air::LowerToAir::crossesInterference):
+        (JSC::B3::Air::LowerToAir::effectiveAddr):
+        (JSC::B3::Air::LowerToAir::addr):
+        (JSC::B3::Air::LowerToAir::loadPromise):
+        (JSC::B3::Air::LowerToAir::imm):
+        (JSC::B3::Air::LowerToAir::immForMove):
+        (JSC::B3::Air::LowerToAir::immOrTmpForMove):
+        (JSC::B3::Air::LowerToAir::tryOpcodeForType):
+        (JSC::B3::Air::LowerToAir::opcodeForType):
+        (JSC::B3::Air::LowerToAir::appendUnOp):
+        (JSC::B3::Air::LowerToAir::appendBinOp):
+        (JSC::B3::Air::LowerToAir::appendShift):
+        (JSC::B3::Air::LowerToAir::tryAppendStoreUnOp):
+        (JSC::B3::Air::LowerToAir::tryAppendStoreBinOp):
+        (JSC::B3::Air::LowerToAir::append):
+        (JSC::B3::Air::LowerToAir::ensureSpecial):
+        (JSC::B3::Air::LowerToAir::fillStackmap):
+        (JSC::B3::Air::LowerToAir::createGenericCompare):
+        (JSC::B3::Air::LowerToAir::createBranch):
+        (JSC::B3::Air::LowerToAir::createCompare):
+        (JSC::B3::Air::LowerToAir::lower):
+        (JSC::B3::Air::LowerToAir::immOrTmp): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::AddressSelector): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptRoot): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptRootLate): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptInternals): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptInternalsLate): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptOperands): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::acceptOperandsLate): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryAddShift1): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryAddShift2): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryAdd): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryFramePointer): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryStackSlot): Deleted.
+        (JSC::B3::Air::LowerToAir::AddressSelector::tryDirect): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptRoot): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptRootLate): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptInternals): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptInternalsLate): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptOperands): Deleted.
+        (JSC::B3::Air::LowerToAir::acceptOperandsLate): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLoad): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLoad8S): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLoad8Z): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLoad16S): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLoad16Z): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAdd): Deleted.
+        (JSC::B3::Air::LowerToAir::trySub): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAnd): Deleted.
+        (JSC::B3::Air::LowerToAir::tryOr): Deleted.
+        (JSC::B3::Air::LowerToAir::tryXor): Deleted.
+        (JSC::B3::Air::LowerToAir::tryShl): Deleted.
+        (JSC::B3::Air::LowerToAir::trySShr): Deleted.
+        (JSC::B3::Air::LowerToAir::tryZShr): Deleted.
+        (JSC::B3::Air::LowerToAir::tryStoreAddLoad): Deleted.
+        (JSC::B3::Air::LowerToAir::tryStoreSubLoad): Deleted.
+        (JSC::B3::Air::LowerToAir::tryStoreAndLoad): Deleted.
+        (JSC::B3::Air::LowerToAir::tryStore): Deleted.
+        (JSC::B3::Air::LowerToAir::tryTrunc): Deleted.
+        (JSC::B3::Air::LowerToAir::tryZExt32): Deleted.
+        (JSC::B3::Air::LowerToAir::tryArgumentReg): Deleted.
+        (JSC::B3::Air::LowerToAir::tryConst32): Deleted.
+        (JSC::B3::Air::LowerToAir::tryConst64): Deleted.
+        (JSC::B3::Air::LowerToAir::tryFramePointer): Deleted.
+        (JSC::B3::Air::LowerToAir::tryStackSlot): Deleted.
+        (JSC::B3::Air::LowerToAir::tryEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryNotEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLessThan): Deleted.
+        (JSC::B3::Air::LowerToAir::tryGreaterThan): Deleted.
+        (JSC::B3::Air::LowerToAir::tryLessEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryGreaterEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAbove): Deleted.
+        (JSC::B3::Air::LowerToAir::tryBelow): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAboveEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryBelowEqual): Deleted.
+        (JSC::B3::Air::LowerToAir::tryPatchpoint): Deleted.
+        (JSC::B3::Air::LowerToAir::tryCheck): Deleted.
+        (JSC::B3::Air::LowerToAir::tryUpsilon): Deleted.
+        (JSC::B3::Air::LowerToAir::tryPhi): Deleted.
+        (JSC::B3::Air::LowerToAir::tryBranch): Deleted.
+        (JSC::B3::Air::LowerToAir::tryJump): Deleted.
+        (JSC::B3::Air::LowerToAir::tryIdentity): Deleted.
+        (JSC::B3::Air::LowerToAir::tryReturn): Deleted.
+        * b3/B3LoweringMatcher.patterns: Removed.
+        * b3/generate_pattern_matcher.rb: Removed.
+
 2015-11-07  Michael Saboff  <msab...@apple.com>
 
         Add conditional moves to the MacroAssembler

Modified: trunk/Source/JavaScriptCore/DerivedSources.make (192134 => 192135)


--- trunk/Source/JavaScriptCore/DerivedSources.make	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/DerivedSources.make	2015-11-08 02:10:57 UTC (rev 192135)
@@ -60,8 +60,6 @@
     Lexer.lut.h \
     KeywordLookup.h \
     RegExpJitTables.h \
-    B3LoweringMatcher.h \
-    B3AddressMatcher.h \
     AirOpcode.h \
 #
 
@@ -265,11 +263,6 @@
 JSReplayInputs.h : $(INPUT_GENERATOR_SPECIFICATIONS) $(INPUT_GENERATOR_SCRIPTS)
 	$(PYTHON) $(JavaScriptCore)/replay/scripts/CodeGeneratorReplayInputs.py --outputDir . --framework JavaScriptCore $(INPUT_GENERATOR_SPECIFICATIONS)
 
-B3LoweringMatcher.h: $(JavaScriptCore)/b3/generate_pattern_matcher.rb $(JavaScriptCore)/b3/B3LoweringMatcher.patterns
-	$(RUBY) $^
-B3AddressMatcher.h: $(JavaScriptCore)/b3/generate_pattern_matcher.rb $(JavaScriptCore)/b3/B3AddressMatcher.patterns
-	$(RUBY) $^
-
 AirOpcode.h: $(JavaScriptCore)/b3/air/opcode_generator.rb $(JavaScriptCore)/b3/air/AirOpcode.opcodes
 	$(RUBY) $^
 

Deleted: trunk/Source/JavaScriptCore/b3/B3AddressMatcher.patterns (192134 => 192135)


--- trunk/Source/JavaScriptCore/b3/B3AddressMatcher.patterns	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/b3/B3AddressMatcher.patterns	2015-11-08 02:10:57 UTC (rev 192135)
@@ -1,42 +0,0 @@
-# Copyright (C) 2015 Apple Inc. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-# 1. Redistributions of source code must retain the above copyright
-#    notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-#    notice, this list of conditions and the following disclaimer in the
-#    documentation and/or other materials provided with the distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
-# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGE.
-
-matcher AddressMatcher
-
-# Note that this relies on B3 doing aggressive reassociation and canonicalization of integer adds.
-# We'll never see Add(Add(x, const1), const2) because it would have already been reassociated into
-# Add(x, const1 + const2). Also, we will never see Add(const, x), because we would have canonicalized
-# it to Add(x, const). That said, in this code we never make assertions about these optimizations
-# having been performed. Also, since the Load/Store instructions in B3 have a built-in offset, we
-# don't need to match Add(x, const). That would have been folded into the Load/Store instructions
-# already. So, the only fancy matching this can do is just BaseIndex forms.
-
-AddShift1 = Add(Shl(index, $logScale), base)
-AddShift2 = Add(base, Shl(index, $logScale))
-
-Add = Add(left, right)
-
-FramePointer = FramePointer()
-StackSlot = StackSlot()
-
-Direct

Modified: trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp (192134 => 192135)


--- trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp	2015-11-08 02:10:57 UTC (rev 192135)
@@ -32,14 +32,12 @@
 #include "AirInsertionSet.h"
 #include "AirInstInlines.h"
 #include "AirStackSlot.h"
-#include "B3AddressMatcher.h"
 #include "B3ArgumentRegValue.h"
 #include "B3BasicBlockInlines.h"
 #include "B3CheckSpecial.h"
 #include "B3Commutativity.h"
 #include "B3IndexMap.h"
 #include "B3IndexSet.h"
-#include "B3LoweringMatcher.h"
 #include "B3MemoryValue.h"
 #include "B3PatchpointSpecial.h"
 #include "B3PatchpointValue.h"
@@ -62,32 +60,31 @@
 class LowerToAir {
 public:
     LowerToAir(Procedure& procedure, Code& code)
-        : valueToTmp(procedure.values().size())
-        , blockToBlock(procedure.size())
-        , useCounts(procedure)
-        , addressSelector(*this)
-        , procedure(procedure)
-        , code(code)
+        : m_valueToTmp(procedure.values().size())
+        , m_blockToBlock(procedure.size())
+        , m_useCounts(procedure)
+        , m_procedure(procedure)
+        , m_code(code)
     {
     }
 
     void run()
     {
-        for (B3::BasicBlock* block : procedure)
-            blockToBlock[block] = code.addBlock(block->frequency());
-        for (Value* value : procedure.values()) {
+        for (B3::BasicBlock* block : m_procedure)
+            m_blockToBlock[block] = m_code.addBlock(block->frequency());
+        for (Value* value : m_procedure.values()) {
             if (StackSlotValue* stackSlotValue = value->as<StackSlotValue>())
-                stackToStack.add(stackSlotValue, code.addStackSlot(stackSlotValue));
+                m_stackToStack.add(stackSlotValue, m_code.addStackSlot(stackSlotValue));
         }
 
-        procedure.resetValueOwners(); // Used by crossesInterference().
+        m_procedure.resetValueOwners(); // Used by crossesInterference().
 
         // Lower defs before uses on a global level. This is a good heuristic to lock down a
         // hoisted address expression before we duplicate it back into the loop.
-        for (B3::BasicBlock* block : procedure.blocksInPreOrder()) {
-            currentBlock = block;
+        for (B3::BasicBlock* block : m_procedure.blocksInPreOrder()) {
+            m_block = block;
             // Reset some state.
-            insts.resize(0);
+            m_insts.resize(0);
 
             if (verbose)
                 dataLog("Lowering Block ", *block, ":\n");
@@ -95,43 +92,40 @@
             // Process blocks in reverse order so we see uses before defs. That's what allows us
             // to match patterns effectively.
             for (unsigned i = block->size(); i--;) {
-                currentIndex = i;
-                currentValue = block->at(i);
-                if (locked.contains(currentValue))
+                m_index = i;
+                m_value = block->at(i);
+                if (m_locked.contains(m_value))
                     continue;
-                insts.append(Vector<Inst>());
+                m_insts.append(Vector<Inst>());
                 if (verbose)
-                    dataLog("Lowering ", deepDump(currentValue), ":\n");
-                bool result = runLoweringMatcher(currentValue, *this);
-                if (!result) {
-                    dataLog("FATAL: could not lower ", deepDump(currentValue), "\n");
-                    RELEASE_ASSERT_NOT_REACHED();
-                }
+                    dataLog("Lowering ", deepDump(m_value), ":\n");
+                lower();
                 if (verbose) {
-                    for (Inst& inst : insts.last())
+                    for (Inst& inst : m_insts.last())
                         dataLog("    ", inst, "\n");
                 }
             }
 
-            // Now append the instructions. insts contains them in reverse order, so we process
+            // Now append the instructions. m_insts contains them in reverse order, so we process
             // it in reverse.
-            for (unsigned i = insts.size(); i--;) {
-                for (Inst& inst : insts[i])
-                    blockToBlock[block]->appendInst(WTF::move(inst));
+            for (unsigned i = m_insts.size(); i--;) {
+                for (Inst& inst : m_insts[i])
+                    m_blockToBlock[block]->appendInst(WTF::move(inst));
             }
 
             // Make sure that the successors are set up correctly.
             ASSERT(block->successors().size() <= 2);
             for (B3::BasicBlock* successor : block->successorBlocks())
-                blockToBlock[block]->successors().append(blockToBlock[successor]);
+                m_blockToBlock[block]->successors().append(m_blockToBlock[successor]);
         }
 
-        Air::InsertionSet insertionSet(code);
-        for (Inst& inst : prologue)
+        Air::InsertionSet insertionSet(m_code);
+        for (Inst& inst : m_prologue)
             insertionSet.insertInst(0, WTF::move(inst));
-        insertionSet.execute(code[0]);
+        insertionSet.execute(m_code[0]);
     }
 
+private:
     bool highBitsAreZero(Value* value)
     {
         switch (value->opcode()) {
@@ -265,14 +259,14 @@
     //
     //     auto tryThings = [this] (const Arg& left, const Arg& right) {
     //         if (isValidForm(Foo, left.kind(), right.kind()))
-    //             return Inst(Foo, currentValue, left, right);
+    //             return Inst(Foo, m_value, left, right);
     //         return Inst();
     //     };
     //     if (Inst result = tryThings(loadAddr(left), tmp(right)))
     //         return result;
     //     if (Inst result = tryThings(tmp(left), loadAddr(right))) // this never succeeds.
     //         return result;
-    //     return Inst(Foo, currentValue, tmp(left), tmp(right));
+    //     return Inst(Foo, m_value, tmp(left), tmp(right));
     //
     // If you imagine that loadAddr(value) is just loadPromise(value).consume(*this), then this code
     // will run correctly - it will generate OK code - but the second form is never matched.
@@ -283,27 +277,27 @@
     //
     //     auto tryThings = [this] (const ArgPromise& left, const ArgPromise& right) {
     //         if (isValidForm(Foo, left.kind(), right.kind()))
-    //             return Inst(Foo, currentValue, left.consume(*this), right.consume(*this));
+    //             return Inst(Foo, m_value, left.consume(*this), right.consume(*this));
     //         return Inst();
     //     };
     //     if (Inst result = tryThings(loadPromise(left), tmpPromise(right)))
     //         return result;
     //     if (Inst result = tryThings(tmpPromise(left), loadPromise(right)))
     //         return result;
-    //     return Inst(Foo, currentValue, tmp(left), tmp(right));
+    //     return Inst(Foo, m_value, tmp(left), tmp(right));
     //
     // Notice that we did use tmp in the fall-back case at the end, because by then, we know for sure
     // that we want a tmp. But using tmpPromise in the tryThings() calls ensures that doing so
     // doesn't prevent us from trying loadPromise on the same value.
     Tmp tmp(Value* value)
     {
-        Tmp& tmp = valueToTmp[value];
+        Tmp& tmp = m_valueToTmp[value];
         if (!tmp) {
             while (shouldCopyPropagate(value))
                 value = value->child(0);
-            Tmp& realTmp = valueToTmp[value];
+            Tmp& realTmp = m_valueToTmp[value];
             if (!realTmp)
-                realTmp = code.newTmp(Arg::typeForB3Type(value->type()));
+                realTmp = m_code.newTmp(Arg::typeForB3Type(value->type()));
             tmp = realTmp;
         }
         return tmp;
@@ -318,11 +312,11 @@
     {
         // If one of the internal things has already been computed, then we don't want to cause
         // it to be recomputed again.
-        if (valueToTmp[value])
+        if (m_valueToTmp[value])
             return false;
         
         // We require internals to have only one use - us.
-        if (useCounts[value] != 1)
+        if (m_useCounts[value] != 1)
             return false;
 
         return true;
@@ -334,7 +328,7 @@
     // short, you should avoid this by using the pattern matcher to match patterns.
     void commitInternal(Value* value)
     {
-        locked.add(value);
+        m_locked.add(value);
     }
 
     bool crossesInterference(Value* value)
@@ -343,13 +337,13 @@
         // willing to do heavier analysis. For example, if we had liveness, then we could label
         // values as "crossing interference" if they interfere with anything that they are live
         // across. But, it's not clear how useful this would be.
-        if (value->owner != currentValue->owner)
+        if (value->owner != m_value->owner)
             return true;
 
         Effects effects = value->effects();
 
-        for (unsigned i = currentIndex; i--;) {
-            Value* otherValue = currentBlock->at(i);
+        for (unsigned i = m_index; i--;) {
+            Value* otherValue = m_block->at(i);
             if (otherValue == value)
                 return false;
             if (effects.interferes(otherValue->effects()))
@@ -363,23 +357,51 @@
     // This turns the given operand into an address.
     Arg effectiveAddr(Value* address)
     {
-        addressSelector.selectedAddress = Arg();
-        if (runAddressMatcher(address, addressSelector))
-            return addressSelector.selectedAddress;
+        // FIXME: Consider matching an address expression even if we've already assigned a
+        // Tmp to it. https://bugs.webkit.org/show_bug.cgi?id=150777
+        if (m_valueToTmp[address])
+            return Arg::addr(tmp(address));
+        
+        switch (address->opcode()) {
+        case Add: {
+            Value* left = address->child(0);
+            Value* right = address->child(1);
 
-        return Arg::addr(tmp(address));
-    }
+            auto tryIndex = [&] (Value* index, Value* offset) -> Arg {
+                if (index->opcode() != Shl)
+                    return Arg();
+                if (m_locked.contains(index->child(0)) || m_locked.contains(offset))
+                    return Arg();
+                if (!index->child(1)->hasInt32())
+                    return Arg();
+                
+                unsigned scale = 1 << (index->child(1)->asInt32() & 31);
+                if (!Arg::isValidScale(scale))
+                    return Arg();
 
-    Arg effectiveAddr(Value* address, Value* memoryValue)
-    {
-        Arg result = effectiveAddr(address);
-        ASSERT(result);
+                return Arg::index(tmp(offset), tmp(index->child(0)), scale);
+            };
 
-        int32_t offset = memoryValue->as<MemoryValue>()->offset();
-        Arg offsetResult = result.withOffset(offset);
-        if (!offsetResult)
-            return Arg::addr(tmp(address), offset);
-        return offsetResult;
+            if (Arg result = tryIndex(left, right))
+                return result;
+            if (Arg result = tryIndex(right, left))
+                return result;
+
+            if (m_locked.contains(left) || m_locked.contains(right))
+                return Arg::addr(tmp(address));
+            
+            return Arg::index(tmp(left), tmp(right));
+        }
+
+        case FramePointer:
+            return Arg::addr(Tmp(GPRInfo::callFrameRegister));
+
+        case B3::StackSlot:
+            return Arg::stack(m_stackToStack.get(address->as<StackSlotValue>()));
+
+        default:
+            return Arg::addr(tmp(address));
+        }
     }
 
     // This gives you the address of the given Load or Store. If it's not a Load or Store, then
@@ -390,7 +412,14 @@
         if (!value)
             return Arg();
 
-        return effectiveAddr(value->lastChild(), value);
+        Arg result = effectiveAddr(value->lastChild());
+        ASSERT(result);
+        
+        int32_t offset = memoryValue->as<MemoryValue>()->offset();
+        Arg offsetResult = result.withOffset(offset);
+        if (!offsetResult)
+            return Arg::addr(tmp(value->lastChild()), offset);
+        return offsetResult;
     }
 
     ArgPromise loadPromise(Value* loadValue, B3::Opcode loadOpcode)
@@ -416,17 +445,58 @@
         return Arg();
     }
 
-    Arg immOrTmp(Value* value)
-    {
+    Arg immForMove(Value* value) {
         if (Arg result = imm(value))
             return result;
+        if (value->hasInt64())
+            return Arg::imm64(value->asInt64());
+        return Arg();
+    }
+
+    Arg immOrTmpForMove(Value* value)
+    {
+        if (Arg result = immForMove(value))
+            return result;
         return tmp(value);
     }
 
-    template<Air::Opcode opcode>
+    // By convention, we use Oops to mean "I don't know".
+    Air::Opcode tryOpcodeForType(
+        Air::Opcode opcode32, Air::Opcode opcode64, Air::Opcode opcodeDouble, Type type)
+    {
+        Air::Opcode opcode;
+        switch (type) {
+        case Int32:
+            opcode = opcode32;
+            break;
+        case Int64:
+            opcode = opcode64;
+            break;
+        case Double:
+            opcode = opcodeDouble;
+            break;
+        default:
+            opcode = Air::Oops;
+            break;
+        }
+
+        return opcode;
+    }
+
+    Air::Opcode opcodeForType(
+        Air::Opcode opcode32, Air::Opcode opcode64, Air::Opcode opcodeDouble, Type type)
+    {
+        Air::Opcode opcode = tryOpcodeForType(opcode32, opcode64, opcodeDouble, type);
+        RELEASE_ASSERT(opcode != Air::Oops);
+        return opcode;
+    }
+
+    template<Air::Opcode opcode32, Air::Opcode opcode64, Air::Opcode opcodeDouble>
     void appendUnOp(Value* value)
     {
-        Tmp result = tmp(currentValue);
+        Air::Opcode opcode = opcodeForType(opcode32, opcode64, opcodeDouble, value->type());
+        
+        Tmp result = tmp(m_value);
 
         // Two operand forms like:
         //     Op a, b
@@ -438,15 +508,19 @@
             return;
         }
 
-        append(relaxedMoveForType(currentValue->type()), tmp(value), result);
+        append(relaxedMoveForType(m_value->type()), tmp(value), result);
         append(opcode, result);
     }
 
-    template<Air::Opcode opcode, Commutativity commutativity = NotCommutative>
+    template<
+        Air::Opcode opcode32, Air::Opcode opcode64, Air::Opcode opcodeDouble,
+        Commutativity commutativity = NotCommutative>
     void appendBinOp(Value* left, Value* right)
     {
-        Tmp result = tmp(currentValue);
+        Air::Opcode opcode = opcodeForType(opcode32, opcode64, opcodeDouble, left->type());
         
+        Tmp result = tmp(m_value);
+        
         // Three-operand forms like:
         //     Op a, b, c
         // mean something like:
@@ -487,7 +561,7 @@
         if (commutativity == Commutative) {
             ArgPromise leftAddr = loadPromise(left);
             if (isValidForm(opcode, leftAddr.kind(), Arg::Tmp)) {
-                append(relaxedMoveForType(currentValue->type()), tmp(right), result);
+                append(relaxedMoveForType(m_value->type()), tmp(right), result);
                 append(opcode, leftAddr.consume(*this), result);
                 return;
             }
@@ -495,13 +569,13 @@
 
         ArgPromise rightAddr = loadPromise(right);
         if (isValidForm(opcode, rightAddr.kind(), Arg::Tmp)) {
-            append(relaxedMoveForType(currentValue->type()), tmp(left), result);
+            append(relaxedMoveForType(m_value->type()), tmp(left), result);
             append(opcode, rightAddr.consume(*this), result);
             return;
         }
 
         if (imm(right) && isValidForm(opcode, Arg::Imm, Arg::Tmp)) {
-            append(relaxedMoveForType(currentValue->type()), tmp(left), result);
+            append(relaxedMoveForType(m_value->type()), tmp(left), result);
             append(opcode, imm(right), result);
             return;
         }
@@ -511,29 +585,57 @@
             return;
         }
 
-        append(relaxedMoveForType(currentValue->type()), tmp(left), result);
+        append(relaxedMoveForType(m_value->type()), tmp(left), result);
         append(opcode, tmp(right), result);
     }
 
-    template<Air::Opcode opcode>
+    template<Air::Opcode opcode32, Air::Opcode opcode64>
+    void appendShift(Value* value, Value* amount)
+    {
+        Air::Opcode opcode = opcodeForType(opcode32, opcode64, Air::Oops, value->type());
+        
+        if (imm(amount)) {
+            append(Move, tmp(value), tmp(m_value));
+            append(opcode, imm(amount), tmp(m_value));
+            return;
+        }
+
+        append(Move, tmp(value), tmp(m_value));
+        append(Move, tmp(amount), Tmp(X86Registers::ecx));
+        append(opcode, Tmp(X86Registers::ecx), tmp(m_value));
+    }
+
+    template<Air::Opcode opcode32, Air::Opcode opcode64>
     bool tryAppendStoreUnOp(Value* value)
     {
-        Arg storeAddr = addr(currentValue);
+        Air::Opcode opcode = tryOpcodeForType(opcode32, opcode64, Air::Oops, value->type());
+        if (opcode == Air::Oops)
+            return false;
+        
+        Arg storeAddr = addr(m_value);
         ASSERT(storeAddr);
 
         ArgPromise loadPromise = this->loadPromise(value);
         if (loadPromise.peek() != storeAddr)
             return false;
 
+        if (!isValidForm(opcode, storeAddr.kind()))
+            return false;
+        
         loadPromise.consume(*this);
         append(opcode, storeAddr);
         return true;
     }
 
-    template<Air::Opcode opcode, Commutativity commutativity = NotCommutative>
+    template<
+        Air::Opcode opcode32, Air::Opcode opcode64, Commutativity commutativity = NotCommutative>
     bool tryAppendStoreBinOp(Value* left, Value* right)
     {
-        Arg storeAddr = addr(currentValue);
+        Air::Opcode opcode = tryOpcodeForType(opcode32, opcode64, Air::Oops, left->type());
+        if (opcode == Air::Oops)
+            return false;
+        
+        Arg storeAddr = addr(m_value);
         ASSERT(storeAddr);
 
         ArgPromise loadPromise;
@@ -608,7 +710,7 @@
     template<typename... Arguments>
     void append(Air::Opcode opcode, Arguments&&... arguments)
     {
-        insts.last().append(Inst(opcode, currentValue, std::forward<Arguments>(arguments)...));
+        m_insts.last().append(Inst(opcode, m_value, std::forward<Arguments>(arguments)...));
     }
 
     template<typename T, typename... Arguments>
@@ -616,7 +718,7 @@
     {
         if (!field) {
             field = static_cast<T*>(
-                code.addSpecial(std::make_unique<T>(std::forward<Arguments>(arguments)...)));
+                m_code.addSpecial(std::make_unique<T>(std::forward<Arguments>(arguments)...)));
         }
         return field;
     }
@@ -629,14 +731,14 @@
             Arg arg;
             switch (value.rep().kind()) {
             case ValueRep::Any:
-                arg = immOrTmp(value.value());
+                arg = immOrTmpForMove(value.value());
                 break;
             case ValueRep::SomeRegister:
                 arg = tmp(value.value());
                 break;
             case ValueRep::Register:
                 arg = Tmp(value.rep().reg());
-                append(Move, immOrTmp(value.value()), arg);
+                append(Move, immOrTmpForMove(value.value()), arg);
                 break;
             case ValueRep::StackArgument:
                 arg = Arg::callArg(value.rep().offsetFromSP());
@@ -650,126 +752,6 @@
         }
     }
     
-    IndexSet<Value> locked; // These are values that will have no Tmp in Air.
-    IndexMap<Value, Tmp> valueToTmp; // These are values that must have a Tmp in Air. We say that a Value* with a non-null Tmp is "pinned".
-    IndexMap<B3::BasicBlock, Air::BasicBlock*> blockToBlock;
-    HashMap<StackSlotValue*, Air::StackSlot*> stackToStack;
-
-    UseCounts useCounts;
-
-    Vector<Vector<Inst, 4>> insts;
-    Vector<Inst> prologue;
-
-    B3::BasicBlock* currentBlock;
-    unsigned currentIndex;
-    Value* currentValue;
-
-    // The address selector will match any pattern where the input operands are available as Tmps.
-    // It doesn't care about sharing. It will happily emit the same address expression over and over
-    // again regardless of the expression's complexity. This works out fine, since at the machine
-    // level, address expressions are super cheap. However, this does have a hack to avoid
-    // "unhoisting" address expressions.
-    class AddressSelector {
-    public:
-        AddressSelector(LowerToAir& lower)
-            : lower(lower)
-        {
-        }
-
-        bool acceptRoot(Value* root)
-        {
-            this->root = root;
-            // We don't want to match an address expression that has already been computed
-            // explicitly. This mostly makes sense. Note that we never hoist basic address
-            // expressions like offset(base), because those are sunk into the MemoryValue. So,
-            // this is mainly just relevant to Index expressions and other more complicated
-            // things. It stands to reason that most of the time, we won't hoist an Index
-            // _expression_. And if we do, then probably it's a good thing to compute it outside
-            // a loop. That being said, this might turn into a problem. For example, it will
-            // require an extra register to be live. That's unlikely to be profitable.
-            // FIXME: Consider matching an address expression even if we've already assigned a
-            // Tmp to it. https://bugs.webkit.org/show_bug.cgi?id=150777
-            return !lower.valueToTmp[root];
-        }
-
-        void acceptRootLate(Value*) { }
-
-        template<typename... Arguments>
-        bool acceptInternals(Value* value, Arguments... arguments)
-        {
-            if (lower.valueToTmp[value])
-                return false;
-            return acceptInternals(arguments...);
-        }
-        bool acceptInternals() { return true; }
-
-        template<typename... Arguments>
-        void acceptInternalsLate(Arguments...) { }
-
-        template<typename... Arguments>
-        bool acceptOperands(Value* value, Arguments... arguments)
-        {
-            if (lower.locked.contains(value))
-                return false;
-            return acceptOperands(arguments...);
-        }
-        bool acceptOperands() { return true; }
-
-        template<typename... Arguments>
-        void acceptOperandsLate(Arguments...) { }
-
-        bool tryAddShift1(Value* index, Value* logScale, Value* base)
-        {
-            if (logScale->asInt() < 0 || logScale->asInt() > 3)
-                return false;
-            selectedAddress = Arg::index(
-                lower.tmp(base), lower.tmp(index), 1 << logScale->asInt());
-            return true;
-        }
-
-        bool tryAddShift2(Value* base, Value* index, Value* logScale)
-        {
-            return tryAddShift1(index, logScale, base);
-        }
-
-        bool tryAdd(Value* left, Value* right)
-        {
-            if (right->hasInt32()) {
-                // This production isn't strictly necessary since we expect
-                // Load(Add(@x, @const1), offset = const2) to have already been strength-reduced
-                // to Load(@x, offset = const1 + const2).
-                selectedAddress = Arg::addr(lower.tmp(left), right->asInt32());
-                return true;
-            }
-
-            selectedAddress = Arg::index(lower.tmp(left), lower.tmp(right));
-            return true;
-        }
-
-        bool tryFramePointer()
-        {
-            selectedAddress = Arg::addr(Tmp(GPRInfo::callFrameRegister));
-            return true;
-        }
-
-        bool tryStackSlot()
-        {
-            selectedAddress = Arg::stack(lower.stackToStack.get(root->as<StackSlotValue>()));
-            return true;
-        }
-
-        bool tryDirect()
-        {
-            selectedAddress = Arg::addr(lower.tmp(root));
-            return true;
-        }
-        
-        LowerToAir& lower;
-        Value* root;
-        Arg selectedAddress;
-    };
-    AddressSelector addressSelector;
-
     // Create an Inst to do the comparison specified by the given value.
     template<typename CompareFunctor, typename TestFunctor, typename CompareDoubleFunctor>
     Inst createGenericCompare(
@@ -783,7 +765,7 @@
         // to follow the rule that the instruction selector reduces strength whenever it doesn't
         // require making things more complicated.
         for (;;) {
-            if (!canBeInternal(value) && value != currentValue)
+            if (!canBeInternal(value) && value != m_value)
                 break;
             bool shouldInvert =
                 (value->opcode() == BitXor && value->child(1)->isInt(1) && value->child(0)->returnsBool())
@@ -1021,7 +1003,7 @@
             }
         };
 
-        if (canBeInternal(value) || value == currentValue) {
+        if (canBeInternal(value) || value == m_value) {
             if (Inst result = attemptFused()) {
                 commitInternal(value);
                 return result;
@@ -1047,7 +1029,7 @@
                 case Arg::Width8:
                     if (isValidForm(Branch8, Arg::RelCond, left.kind(), right.kind())) {
                         return Inst(
-                            Branch8, currentValue, relCond,
+                            Branch8, m_value, relCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
@@ -1056,14 +1038,14 @@
                 case Arg::Width32:
                     if (isValidForm(Branch32, Arg::RelCond, left.kind(), right.kind())) {
                         return Inst(
-                            Branch32, currentValue, relCond,
+                            Branch32, m_value, relCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
                 case Arg::Width64:
                     if (isValidForm(Branch64, Arg::RelCond, left.kind(), right.kind())) {
                         return Inst(
-                            Branch64, currentValue, relCond,
+                            Branch64, m_value, relCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
@@ -1076,7 +1058,7 @@
                 case Arg::Width8:
                     if (isValidForm(BranchTest8, Arg::ResCond, left.kind(), right.kind())) {
                         return Inst(
-                            BranchTest8, currentValue, resCond,
+                            BranchTest8, m_value, resCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
@@ -1085,14 +1067,14 @@
                 case Arg::Width32:
                     if (isValidForm(BranchTest32, Arg::ResCond, left.kind(), right.kind())) {
                         return Inst(
-                            BranchTest32, currentValue, resCond,
+                            BranchTest32, m_value, resCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
                 case Arg::Width64:
                     if (isValidForm(BranchTest64, Arg::ResCond, left.kind(), right.kind())) {
                         return Inst(
-                            BranchTest64, currentValue, resCond,
+                            BranchTest64, m_value, resCond,
                             left.consume(*this), right.consume(*this));
                     }
                     return Inst();
@@ -1101,7 +1083,7 @@
             [this] (Arg doubleCond, const ArgPromise& left, const ArgPromise& right) -> Inst {
                 if (isValidForm(BranchDouble, Arg::DoubleCond, left.kind(), right.kind())) {
                     return Inst(
-                        BranchDouble, currentValue, doubleCond,
+                        BranchDouble, m_value, doubleCond,
                         left.consume(*this), right.consume(*this));
                 }
                 return Inst();
@@ -1123,15 +1105,15 @@
                 case Arg::Width32:
                     if (isValidForm(Compare32, Arg::RelCond, left.kind(), right.kind(), Arg::Tmp)) {
                         return Inst(
-                            Compare32, currentValue, relCond,
-                            left.consume(*this), right.consume(*this), tmp(currentValue));
+                            Compare32, m_value, relCond,
+                            left.consume(*this), right.consume(*this), tmp(m_value));
                     }
                     return Inst();
                 case Arg::Width64:
                     if (isValidForm(Compare64, Arg::RelCond, left.kind(), right.kind(), Arg::Tmp)) {
                         return Inst(
-                            Compare64, currentValue, relCond,
-                            left.consume(*this), right.consume(*this), tmp(currentValue));
+                            Compare64, m_value, relCond,
+                            left.consume(*this), right.consume(*this), tmp(m_value));
                     }
                     return Inst();
                 }
@@ -1146,15 +1128,15 @@
                 case Arg::Width32:
                     if (isValidForm(Test32, Arg::ResCond, left.kind(), right.kind(), Arg::Tmp)) {
                         return Inst(
-                            Test32, currentValue, resCond,
-                            left.consume(*this), right.consume(*this), tmp(currentValue));
+                            Test32, m_value, resCond,
+                            left.consume(*this), right.consume(*this), tmp(m_value));
                     }
                     return Inst();
                 case Arg::Width64:
                     if (isValidForm(Test64, Arg::ResCond, left.kind(), right.kind(), Arg::Tmp)) {
                         return Inst(
-                            Test64, currentValue, resCond,
-                            left.consume(*this), right.consume(*this), tmp(currentValue));
+                            Test64, m_value, resCond,
+                            left.consume(*this), right.consume(*this), tmp(m_value));
                     }
                     return Inst();
                 }
@@ -1167,433 +1149,272 @@
             inverted);
     }
 
-    // Below is the code for a lowering selector, so that we can pass *this to runLoweringMatcher.
-    // This will match complex multi-value expressions, but only if there is no sharing. For example,
-    // it won't match a Load twice and cause the generated code to do two loads when the original
-    // code just did one.
-    
-    bool acceptRoot(Value* root)
+    void lower()
     {
-        ASSERT_UNUSED(root, !locked.contains(root));
-        return true;
-    }
-    
-    void acceptRootLate(Value*) { }
-    
-    template<typename... Arguments>
-    bool acceptInternals(Value* value, Arguments... arguments)
-    {
-        if (!canBeInternal(value))
-            return false;
-        
-        return acceptInternals(arguments...);
-    }
-    bool acceptInternals() { return true; }
-    
-    template<typename... Arguments>
-    void acceptInternalsLate(Value* value, Arguments... arguments)
-    {
-        commitInternal(value);
-        acceptInternalsLate(arguments...);
-    }
-    void acceptInternalsLate() { }
-    
-    template<typename... Arguments>
-    bool acceptOperands(Value* value, Arguments... arguments)
-    {
-        if (locked.contains(value))
-            return false;
-        return acceptOperands(arguments...);
-    }
-    bool acceptOperands() { return true; }
-    
-    template<typename... Arguments>
-    void acceptOperandsLate(Arguments...) { }
-    
-    bool tryLoad(Value* address)
-    {
-        append(
-            moveForType(currentValue->type()),
-            effectiveAddr(address, currentValue), tmp(currentValue));
-        return true;
-    }
+        switch (m_value->opcode()) {
+        case Load: {
+            append(
+                moveForType(m_value->type()),
+                addr(m_value), tmp(m_value));
+            return;
+        }
+            
+        case Load8S: {
+            append(Load8SignedExtendTo32, addr(m_value), tmp(m_value));
+            return;
+        }
 
-    bool tryLoad8S(Value* address)
-    {
-        append(Load8SignedExtendTo32, effectiveAddr(address, currentValue), tmp(currentValue));
-        return true;
-    }
-    
-    bool tryLoad8Z(Value* address)
-    {
-        append(Load8, effectiveAddr(address, currentValue), tmp(currentValue));
-        return true;
-    }
-    
-    bool tryLoad16S(Value* address)
-    {
-        append(Load16SignedExtendTo32, effectiveAddr(address, currentValue), tmp(currentValue));
-        return true;
-    }
-    
-    bool tryLoad16Z(Value* address)
-    {
-        append(Load16, effectiveAddr(address, currentValue), tmp(currentValue));
-        return true;
-    }
-    
-    bool tryAdd(Value* left, Value* right)
-    {
-        switch (left->type()) {
-        case Int32:
-            appendBinOp<Add32, Commutative>(left, right);
-            return true;
-        case Int64:
-            appendBinOp<Add64, Commutative>(left, right);
-            return true;
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case Load8Z: {
+            append(Load8, addr(m_value), tmp(m_value));
+            return;
         }
-    }
 
-    bool trySub(Value* left, Value* right)
-    {
-        switch (left->type()) {
-        case Int32:
-            if (left->isInt32(0))
-                appendUnOp<Neg32>(right);
-            else
-                appendBinOp<Sub32>(left, right);
-            return true;
-        case Int64:
-            if (left->isInt64(0))
-                appendUnOp<Neg64>(right);
-            else
-                appendBinOp<Sub64>(left, right);
-            return true;
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case Load16S: {
+            append(Load16SignedExtendTo32, addr(m_value), tmp(m_value));
+            return;
         }
-    }
-    
-    bool tryAnd(Value* left, Value* right)
-    {
-        switch (left->type()) {
-        case Int32:
-            appendBinOp<And32, Commutative>(left, right);
-            return true;
-        case Int64:
-            appendBinOp<And64, Commutative>(left, right);
-            return true;
-        default:
-            // FIXME: Implement more types!
-            return false;
+
+        case Load16Z: {
+            append(Load16, addr(m_value), tmp(m_value));
+            return;
         }
-    }
 
-    bool tryOr(Value* left, Value* right)
-    {
-        switch (left->type()) {
-        case Int32:
-            appendBinOp<Or32, Commutative>(left, right);
-            return true;
-        case Int64:
-            appendBinOp<Or64, Commutative>(left, right);
-            return true;
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case Add: {
+            // FIXME: Need a story for doubles.
+            // https://bugs.webkit.org/show_bug.cgi?id=150991
+            appendBinOp<Add32, Add64, Air::Oops, Commutative>(
+                m_value->child(0), m_value->child(1));
+            return;
         }
-    }
 
-    bool tryXor(Value* left, Value* right)
-    {
-        switch (left->type()) {
-        case Int32:
-            appendBinOp<Xor32, Commutative>(left, right);
-            return true;
-        case Int64:
-            appendBinOp<Xor64, Commutative>(left, right);
-            return true;
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case Sub: {
+            if (m_value->child(0)->isInt(0))
+                appendUnOp<Neg32, Neg64, Air::Oops>(m_value->child(1));
+            else
+                appendBinOp<Sub32, Sub64, Air::Oops>(m_value->child(0), m_value->child(1));
+            return;
         }
-    }
 
-    void appendShift(Air::Opcode opcode, Value* value, Value* amount)
-    {
-        if (imm(amount)) {
-            append(Move, tmp(value), tmp(currentValue));
-            append(opcode, imm(amount), tmp(currentValue));
+        case BitAnd: {
+            appendBinOp<And32, And64, Air::Oops, Commutative>(
+                m_value->child(0), m_value->child(1));
             return;
         }
 
-        append(Move, tmp(value), tmp(currentValue));
-        append(Move, tmp(amount), Tmp(X86Registers::ecx));
-        append(opcode, Tmp(X86Registers::ecx), tmp(currentValue));
-    }
+        case BitOr: {
+            appendBinOp<Or32, Or64, Air::Oops, Commutative>(
+                m_value->child(0), m_value->child(1));
+            return;
+        }
 
-    bool tryShl(Value* value, Value* amount)
-    {
-        Air::Opcode opcode = value->type() == Int32 ? Lshift32 : Lshift64;
-        appendShift(opcode, value, amount);
-        return true;
-    }
+        case BitXor: {
+            appendBinOp<Xor32, Xor64, Air::Oops, Commutative>(
+                m_value->child(0), m_value->child(1));
+            return;
+        }
 
-    bool trySShr(Value* value, Value* amount)
-    {
-        Air::Opcode opcode = value->type() == Int32 ? Rshift32 : Rshift64;
-        appendShift(opcode, value, amount);
-        return true;
-    }
-
-    bool tryZShr(Value* value, Value* amount)
-    {
-        Air::Opcode opcode = value->type() == Int32 ? Urshift32 : Urshift64;
-        appendShift(opcode, value, amount);
-        return true;
-    }
-    
-    bool tryStoreAddLoad(Value* left, Value* right, Value*)
-    {
-        switch (left->type()) {
-        case Int32:
-            return tryAppendStoreBinOp<Add32, Commutative>(left, right);
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case Shl: {
+            appendShift<Lshift32, Lshift64>(m_value->child(0), m_value->child(1));
+            return;
         }
-    }
 
-    bool tryStoreSubLoad(Value* left, Value* right, Value*)
-    {
-        switch (left->type()) {
-        case Int32:
-            if (left->isInt32(0))
-                return tryAppendStoreUnOp<Neg32>(right);
-            return tryAppendStoreBinOp<Sub32, NotCommutative>(left, right);
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case SShr: {
+            appendShift<Rshift32, Rshift64>(m_value->child(0), m_value->child(1));
+            return;
         }
-    }
 
-    bool tryStoreAndLoad(Value* left, Value* right, Value*)
-    {
-        switch (left->type()) {
-        case Int32:
-            return tryAppendStoreBinOp<And32, Commutative>(left, right);
-        default:
-            // FIXME: Implement more types!
-            return false;
+        case ZShr: {
+            appendShift<Urshift32, Urshift64>(m_value->child(0), m_value->child(1));
+            return;
         }
-    }
-    
-    bool tryStore(Value* value, Value* address)
-    {
-        appendStore(value, effectiveAddr(address, currentValue));
-        return true;
-    }
 
-    bool tryTrunc(Value* value)
-    {
-        ASSERT_UNUSED(value, tmp(value) == tmp(currentValue));
-        return true;
-    }
+        case Store: {
+            Value* valueToStore = m_value->child(0);
+            if (canBeInternal(valueToStore)) {
+                bool matched = false;
+                switch (valueToStore->opcode()) {
+                case Add:
+                    matched = tryAppendStoreBinOp<Add32, Add64, Commutative>(
+                        valueToStore->child(0), valueToStore->child(1));
+                    break;
+                case Sub:
+                    if (valueToStore->child(0)->isInt(0)) {
+                        matched = tryAppendStoreUnOp<Neg32, Neg64>(valueToStore->child(1));
+                        break;
+                    }
+                    matched = tryAppendStoreBinOp<Sub32, Sub64>(
+                        valueToStore->child(0), valueToStore->child(1));
+                    break;
+                case BitAnd:
+                    matched = tryAppendStoreBinOp<And32, And64, Commutative>(
+                        valueToStore->child(0), valueToStore->child(1));
+                    break;
+                default:
+                    break;
+                }
+                if (matched) {
+                    commitInternal(valueToStore);
+                    return;
+                }
+            }
 
-    bool tryZExt32(Value* value)
-    {
-        if (highBitsAreZero(value)) {
-            ASSERT(tmp(value) == tmp(currentValue));
-            return true;
+            appendStore(valueToStore, addr(m_value));
+            return;
         }
-        
-        append(Move32, tmp(value), tmp(currentValue));
-        return true;
-    }
 
-    bool tryArgumentReg()
-    {
-        prologue.append(Inst(
-            moveForType(currentValue->type()), currentValue,
-            Tmp(currentValue->as<ArgumentRegValue>()->argumentReg()),
-            tmp(currentValue)));
-        return true;
-    }
-    
-    bool tryConst32()
-    {
-        append(Move, imm(currentValue), tmp(currentValue));
-        return true;
-    }
-    
-    bool tryConst64()
-    {
-        if (imm(currentValue)) {
-            append(Move, imm(currentValue), tmp(currentValue));
-            return true;
+        case Trunc: {
+            ASSERT(tmp(m_value->child(0)) == tmp(m_value));
+            return;
         }
-        append(Move, Arg::imm64(currentValue->asInt64()), tmp(currentValue));
-        return true;
-    }
 
-    bool tryFramePointer()
-    {
-        append(Move, Tmp(GPRInfo::callFrameRegister), tmp(currentValue));
-        return true;
-    }
+        case ZExt32: {
+            if (highBitsAreZero(m_value->child(0))) {
+                ASSERT(tmp(m_value->child(0)) == tmp(m_value));
+                return;
+            }
 
-    bool tryStackSlot()
-    {
-        append(
-            Lea,
-            Arg::stack(stackToStack.get(currentValue->as<StackSlotValue>())),
-            tmp(currentValue));
-        return true;
-    }
+            append(Move32, tmp(m_value->child(0)), tmp(m_value));
+            return;
+        }
 
-    bool tryEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case ArgumentReg: {
+            m_prologue.append(Inst(
+                moveForType(m_value->type()), m_value,
+                Tmp(m_value->as<ArgumentRegValue>()->argumentReg()),
+                tmp(m_value)));
+            return;
+        }
 
-    bool tryNotEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Const32:
+        case Const64: {
+            append(Move, immForMove(m_value), tmp(m_value));
+            return;
+        }
 
-    bool tryLessThan()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case FramePointer: {
+            append(Move, Tmp(GPRInfo::callFrameRegister), tmp(m_value));
+            return;
+        }
 
-    bool tryGreaterThan()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case B3::StackSlot: {
+            append(
+                Lea,
+                Arg::stack(m_stackToStack.get(m_value->as<StackSlotValue>())),
+                tmp(m_value));
+            return;
+        }
 
-    bool tryLessEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Equal:
+        case NotEqual:
+        case LessThan:
+        case GreaterThan:
+        case LessEqual:
+        case GreaterEqual:
+        case Above:
+        case Below:
+        case AboveEqual:
+        case BelowEqual: {
+            m_insts.last().append(createCompare(m_value));
+            return;
+        }
 
-    bool tryGreaterEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Patchpoint: {
+            PatchpointValue* patchpointValue = m_value->as<PatchpointValue>();
+            ensureSpecial(m_patchpointSpecial);
+            
+            Inst inst(Patch, patchpointValue, Arg::special(m_patchpointSpecial));
+            
+            if (patchpointValue->type() != Void)
+                inst.args.append(tmp(patchpointValue));
+            
+            fillStackmap(inst, patchpointValue, 0);
+            
+            m_insts.last().append(WTF::move(inst));
+            return;
+        }
 
-    bool tryAbove()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Check: {
+            Inst branch = createBranch(m_value->child(0));
+            
+            CheckSpecial::Key key(branch);
+            auto result = m_checkSpecials.add(key, nullptr);
+            Special* special = ensureSpecial(result.iterator->value, key);
+            
+            CheckValue* checkValue = m_value->as<CheckValue>();
+            
+            Inst inst(Patch, checkValue, Arg::special(special));
+            inst.args.appendVector(branch.args);
+            
+            fillStackmap(inst, checkValue, 1);
+            
+            m_insts.last().append(WTF::move(inst));
+            return;
+        }
 
-    bool tryBelow()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Upsilon: {
+            Value* value = m_value->child(0);
+            append(
+                relaxedMoveForType(value->type()), immOrTmpForMove(value),
+                tmp(m_value->as<UpsilonValue>()->phi()));
+            return;
+        }
 
-    bool tryAboveEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Phi: {
+            // Our semantics are determined by Upsilons, so we have nothing to do here.
+            return;
+        }
 
-    bool tryBelowEqual()
-    {
-        insts.last().append(createCompare(currentValue));
-        return true;
-    }
+        case Branch: {
+            m_insts.last().append(createBranch(m_value->child(0)));
+            return;
+        }
 
-    PatchpointSpecial* patchpointSpecial { nullptr };
-    bool tryPatchpoint()
-    {
-        PatchpointValue* patchpointValue = currentValue->as<PatchpointValue>();
-        ensureSpecial(patchpointSpecial);
+        case B3::Jump: {
+            append(Air::Jump);
+            return;
+        }
+            
+        case Identity: {
+            ASSERT(tmp(m_value->child(0)) == tmp(m_value));
+            return;
+        }
 
-        Inst inst(Patch, patchpointValue, Arg::special(patchpointSpecial));
+        case Return: {
+            Value* value = m_value->child(0);
+            append(
+                relaxedMoveForType(value->type()), immOrTmpForMove(value),
+                Tmp(GPRInfo::returnValueGPR));
+            append(Ret);
+            return;
+        }
 
-        if (patchpointValue->type() != Void)
-            inst.args.append(tmp(patchpointValue));
+        default:
+            break;
+        }
 
-        fillStackmap(inst, patchpointValue, 0);
-        
-        insts.last().append(WTF::move(inst));
-        return true;
+        dataLog("FATAL: could not lower ", deepDump(m_value), "\n");
+        RELEASE_ASSERT_NOT_REACHED();
     }
+    
+    IndexSet<Value> m_locked; // These are values that will have no Tmp in Air.
+    IndexMap<Value, Tmp> m_valueToTmp; // These are values that must have a Tmp in Air. We say that a Value* with a non-null Tmp is "pinned".
+    IndexMap<B3::BasicBlock, Air::BasicBlock*> m_blockToBlock;
+    HashMap<StackSlotValue*, Air::StackSlot*> m_stackToStack;
 
-    HashMap<CheckSpecial::Key, CheckSpecial*> checkSpecials;
-    bool tryCheck(Value* value)
-    {
-        Inst branch = createBranch(value);
+    UseCounts m_useCounts;
 
-        CheckSpecial::Key key(branch);
-        auto result = checkSpecials.add(key, nullptr);
-        Special* special = ensureSpecial(result.iterator->value, key);
+    Vector<Vector<Inst, 4>> m_insts;
+    Vector<Inst> m_prologue;
 
-        CheckValue* checkValue = currentValue->as<CheckValue>();
+    B3::BasicBlock* m_block;
+    unsigned m_index;
+    Value* m_value;
 
-        Inst inst(Patch, checkValue, Arg::special(special));
-        inst.args.appendVector(branch.args);
+    PatchpointSpecial* m_patchpointSpecial { nullptr };
+    HashMap<CheckSpecial::Key, CheckSpecial*> m_checkSpecials;
 
-        fillStackmap(inst, checkValue, 1);
-
-        insts.last().append(WTF::move(inst));
-        return true;
-    }
-
-    bool tryUpsilon(Value* value)
-    {
-        append(
-            relaxedMoveForType(value->type()),
-            immOrTmp(value),
-            tmp(currentValue->as<UpsilonValue>()->phi()));
-        return true;
-    }
-
-    bool tryPhi()
-    {
-        // Our semantics are determined by Upsilons, so we have nothing to do here.
-        return true;
-    }
-
-    bool tryBranch(Value* value)
-    {
-        insts.last().append(createBranch(value));
-        return true;
-    }
-
-    bool tryJump()
-    {
-        append(Air::Jump);
-        return true;
-    }
-
-    bool tryIdentity(Value* value)
-    {
-        ASSERT_UNUSED(value, tmp(value) == tmp(currentValue));
-        return true;
-    }
-    
-    bool tryReturn(Value* value)
-    {
-        append(relaxedMoveForType(value->type()), immOrTmp(value), Tmp(GPRInfo::returnValueGPR));
-        append(Ret);
-        return true;
-    }
-    
-    Procedure& procedure;
-    Code& code;
+    Procedure& m_procedure;
+    Code& m_code;
 };
 
 } // anonymous namespace

Deleted: trunk/Source/JavaScriptCore/b3/B3LoweringMatcher.patterns (192134 => 192135)


--- trunk/Source/JavaScriptCore/b3/B3LoweringMatcher.patterns	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/b3/B3LoweringMatcher.patterns	2015-11-08 02:10:57 UTC (rev 192135)
@@ -1,91 +0,0 @@
-# Copyright (C) 2015 Apple Inc. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-# 1. Redistributions of source code must retain the above copyright
-#    notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-#    notice, this list of conditions and the following disclaimer in the
-#    documentation and/or other materials provided with the distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
-# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGE.
-
-matcher LoweringMatcher
-
-# These rules are considered in a cascade, starting at the top. It's implemented using switch
-# statements wherever possible.
-
-# Note: adding more rules to this matcher is an excellent way of improving performance!
-
-Load = Load(address)
-
-Load8S = Load8S(address)
-Load8Z = Load8Z(address)
-Load16S = Load16S(address)
-Load16Z = Load16Z(address)
-
-Add = Add(left, right)
-Sub = Sub(left, right)
-And = BitAnd(left, right)
-Or = BitOr(left, right)
-Xor = BitXor(left, right)
-
-Shl = Shl(value, amount)
-SShr = SShr(value, amount)
-ZShr = ZShr(value, amount)
-
-StoreAddLoad = Store(Add(left, right), address)
-StoreSubLoad = Store(Sub(left, right), address)
-StoreAndLoad = Store(BitAnd(left, right), address)
-Store = Store(value, address)
-
-Trunc = Trunc(value)
-
-ZExt32 = ZExt32(value)
-
-ArgumentReg = ArgumentReg()
-
-Const32 = Const32()
-Const64 = Const64()
-
-StackSlot = StackSlot()
-FramePointer = FramePointer()
-
-# These patterns delegate to the comparison matcher.
-Equal = Equal()
-NotEqual = NotEqual()
-LessThan = LessThan()
-GreaterThan = GreaterThan()
-LessEqual = LessEqual()
-GreaterEqual = GreaterEqual()
-Above = Above()
-Below = Below()
-AboveEqual = AboveEqual()
-BelowEqual = BelowEqual()
-
-Patchpoint = Patchpoint()
-Check = Check(value)
-
-Upsilon = Upsilon(value)
-Phi = Phi()
-
-Branch = Branch(value)
-Jump = Jump()
-
-# It would be fantastic to not have to have this here, but the philosophy of our lowering is that
-# it should be sound even if we haven't run optimizations.
-Identity = Identity(value)
-
-Return = Return(value)
-

Deleted: trunk/Source/JavaScriptCore/b3/generate_pattern_matcher.rb (192134 => 192135)


--- trunk/Source/JavaScriptCore/b3/generate_pattern_matcher.rb	2015-11-08 01:29:22 UTC (rev 192134)
+++ trunk/Source/JavaScriptCore/b3/generate_pattern_matcher.rb	2015-11-08 02:10:57 UTC (rev 192135)
@@ -1,550 +0,0 @@
-#!/usr/bin/env ruby
-
-# Copyright (C) 2015 Apple Inc. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-# 1. Redistributions of source code must retain the above copyright
-#    notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-#    notice, this list of conditions and the following disclaimer in the
-#    documentation and/or other materials provided with the distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
-# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGE.
-
-require "pathname"
-
-$varIndex = 0
-def newVar
-    $varIndex += 1
-    "_tmp#{$varIndex}"
-end
-
-class Production
-    attr_reader :origin, :name, :varName, :anonymous, :opcode, :children
-
-    def initialize(origin, name, opcode, children)
-        @origin = origin
-        @name = name
-        @anonymous = (name == nil or name =~ /\A\$([0-9]+)\Z/)
-        @varName = @anonymous ? newVar : name =~ /\A\$/ ? $' : name
-        @opcode = opcode
-        @children = children ? children : []
-    end
-
-    def hasOpcode
-        opcode != nil
-    end
-
-    def eachVariable(&proc)
-        children.each_with_index {
-            | child, index |
-            yield varName,
-                  child.name,
-                  child.varName,
-                  child.anonymous ? (child.opcode ? :anonInternal : :anonOperand) : (child.opcode ? :internal : :operand),
-                  index
-            child.eachVariable(&proc)
-        }
-    end
-end
-
-class Matcher
-    attr_reader :name, :productions
-
-    def initialize(name, productions)
-        @name = name
-        @productions = productions
-    end
-end
-
-class Origin
-    attr_reader :fileName, :lineNumber
-    
-    def initialize(fileName, lineNumber)
-        @fileName = fileName
-        @lineNumber = lineNumber
-    end
-    
-    def to_s
-        "#{fileName}:#{lineNumber}"
-    end
-end
-
-class Token
-    attr_reader :origin, :string
-    
-    def initialize(origin, string)
-        @origin = origin
-        @string = string
-    end
-    
-    def ==(other)
-        if other.is_a? Token
-            @string == other.string
-        else
-            @string == other
-        end
-    end
-    
-    def =~(other)
-        @string =~ other
-    end
-    
-    def to_s
-        "#{@string.inspect} at #{origin}"
-    end
-    
-    def parseError(*comment)
-        if comment.empty?
-            raise "Parse error: #{to_s}"
-        else
-            raise "Parse error: #{to_s}: #{comment[0]}"
-        end
-    end
-end
-
-def lex(str, fileName)
-    fileName = Pathname.new(fileName)
-    result = []
-    lineNumber = 1
-    while not str.empty?
-        case str
-        when /\A\#([^\n]*)/
-            # comment, ignore
-        when /\A\n/
-            # newline, ignore
-            lineNumber += 1
-        when /\A[a-zA-Z0-9$]([a-zA-Z0-9_]*)/
-            result << Token.new(Origin.new(fileName, lineNumber), $&)
-        when /\A([ \t\r]+)/
-            # whitespace, ignore
-        when /\A[=(),]/
-            result << Token.new(Origin.new(fileName, lineNumber), $&)
-        else
-            raise "Lexer error at #{Origin.new(fileName, lineNumber).to_s}, unexpected sequence #{str[0..20].inspect}"
-        end
-        str = $~.post_match
-    end
-    result
-end
-
-def isKeyword(token)
-    # We can extend this as we add more keywords. Like, we might want "include" eventually.
-    token =~ /\A((matcher)|(include))\Z/
-end
-
-def isIdentifier(token)
-    token =~ /\A[a-zA-Z0-9$]([a-zA-Z0-9_]*)\Z/ and not isKeyword(token)
-end
-
-class Parser
-    def initialize(data, fileName)
-        @tokens = lex(data, fileName)
-        @idx = 0
-    end
-
-    def token
-        @tokens[@idx]
-    end
-
-    def advance
-        @idx += 1
-    end
-
-    def parseError(*comment)
-        if token
-            token.parseError(*comment)
-        else
-            if comment.empty?
-                raise "Parse error at end of file"
-            else
-                raise "Parse error at end of file: #{comment[0]}"
-            end
-        end
-    end
-
-    def consume(string)
-        parseError("Expected #{string}") unless token == string
-        advance
-    end
-
-    def consumeIdentifier
-        result = token.string
-        parseError("Expected identifier") unless isIdentifier(result)
-        advance
-        result
-    end
-
-    def addUnique(name, productions, production)
-        if name == production.varName
-            parseError("Child production has same name as parent production")
-        end
-        if productions.detect { | entry | entry.name == production.name }
-            parseError("Duplicate production")
-        end
-        productions << production
-    end
-
-    def parse
-        consume("matcher")
-
-        matcherName = consumeIdentifier
-
-        productions = []
-
-        loop {
-            break if @idx >= @tokens.length
-
-            # We might want to support "include". If we did that, it would go here.
-            production = parseProduction
-            
-            if productions.detect { | entry | entry.name == production.name }
-                parseError("Duplicate production")
-            end
-
-            productions << production
-        }
-
-        Matcher.new(matcherName, productions)
-    end
-
-    def parseProduction
-        origin = token.origin
-        name = consumeIdentifier
-
-        if token == "="
-            consume("=")
-            opcode = consumeIdentifier
-            consume("(")
-        elsif token == "("
-            consume("(")
-            opcode = name
-            name = nil
-        else
-            return Production.new(origin, name, nil, nil)
-        end
-        
-        productions = []
-        loop {
-            break if token == ")"
-
-            addUnique(name, productions, parseProduction)
-
-            break if token == ")"
-            consume(",")
-        }
-
-        # Advance after the ")"
-        advance
-
-        Production.new(origin, name, opcode, productions)
-    end
-end
-
-fileName = ARGV[0]
-
-parser = Parser.new(IO::read(fileName), fileName)
-matcher = parser.parse
-
-class SubMatch
-    attr_reader :indexMap, :productions
-    
-    def initialize(indexList, productions)
-        @indexMap = []
-        @productions = []
-        indexList.each {
-            | index |
-            @indexMap << index
-            @productions << productions[index]
-        }
-    end
-
-    def mapIndexList(indexList)
-        indexList.map {
-            | index |
-            @indexMap[index]
-        }
-    end
-end
-
-$stderr.puts "Generating code for #{fileName}."
-
-File.open("B3" + matcher.name + ".h", "w") {
-    | outp |
-
-    outp.puts "// Generated by generate_pattern_matcher.rb from #{fileName} -- do not edit!"
-
-    outp.puts "#ifndef B3#{matcher.name}_h"
-    outp.puts "#define B3#{matcher.name}_h"
-    
-    outp.puts "#include \"B3Value.h\""
-    
-    outp.puts "namespace JSC { namespace B3 {"
-
-    outp.puts "template<typename Adaptor>"
-    outp.puts "bool run#{matcher.name}(Value* _value, Adaptor& adaptor)"
-    outp.puts "{"
-
-    # Note that this does not emit properly indented code, because it's a recursive emitter. If you
-    # want to see the code nicely formatted, open it in a good text editor and ask it to format it
-    # for you.
-
-    # In odd situations, we may not use the input value. Tell the compiler to chill out about it.
-    outp.puts "UNUSED_PARAM(_value);"
-        
-    # This just protects the caller from having to worry about productions having different lengths.
-    def matchLength(outp, valueName, productions)
-        indexLists = []
-        productions.each_with_index {
-            | production, index |
-            if indexLists[-1] and productions[indexLists[-1][0]].children.length == production.children.length
-                indexLists[-1] << index
-            else
-                indexLists << [index]
-            end
-        }
-
-        indexLists.each {
-            | indexList |
-            length = productions[indexList[0]].children.length
-            if length > 0
-                outp.puts "if (#{valueName}->children().size() >= #{length}) {"
-                yield indexList
-                outp.puts "}"
-            else
-                yield indexList
-            end
-        }
-    end
-
-    # This efficiently selects from the given set of productions. It assumes that productions
-    # are not duplicated. It yields the index of the found production, and the coroutine is
-    # responsible for emitting specific code for that production. The coroutine is given lists of
-    # indices of possibly-matching productions.
-    def matchProductions(outp, valueName, productions)
-        # First, split the productions list into runs of productions with an opcode and productions
-        # without one.
-        indexLists = []
-        productions.each_with_index {
-            | production, index |
-            if indexLists[-1] and productions[indexLists[-1][0]].hasOpcode == production.hasOpcode
-                indexLists[-1] << index
-            else
-                indexLists << [index]
-            end
-        }
-        
-        # Now, emit pattern matching code. We do it differently for lists with an opcode than for
-        # lists without one.
-        indexLists.each {
-            | indexList |
-            if productions[indexList[0]].hasOpcode
-                # Now, we split this index list into groups for each opcode.
-                groups = {}
-                indexList.each {
-                    | index |
-                    production = productions[index]
-                    if groups[production.opcode]
-                        groups[production.opcode] << index
-                    else
-                        groups[production.opcode] = [index]
-                    end
-                }
-
-                # Emit a switch statement.
-                outp.puts "switch (#{valueName}->opcode()) {"
-                groups.each_pair {
-                    | opcode, subIndexList |
-                    outp.puts "case #{opcode}:"
-                    subMatch = SubMatch.new(subIndexList, productions)
-                    matchLength(outp, valueName, subMatch.productions) {
-                        | indexList |
-                        yield subMatch.mapIndexList(indexList)
-                    }
-                    outp.puts "break;"
-                }
-                outp.puts "default:"
-                outp.puts "break;"
-                outp.puts "}"
-            else
-                subMatch = SubMatch.new(indexList, productions)
-                matchLength(outp, valueName, subMatch.productions) {
-                    | indexList |
-                    yield subMatch.mapIndexList(indexList)
-                }
-            end
-        }
-    end
-
-    # Takes productions that have the same opcode and the same length and selects from them based on
-    # the productions at the given column.
-    def matchColumn(outp, valueName, productions, columnIndex)
-        if columnIndex >= productions[0].children.length
-            yield (0...(productions.size)).to_a
-            return
-        end
-
-        subValue = newVar
-        
-        outp.puts "{"
-        outp.puts "Value* #{subValue} = #{valueName}->child(#{columnIndex});"
-
-        # We may not use this value. Tell the compiler to chill out about it.
-        outp.puts "UNUSED_PARAM(#{subValue});"
-        
-        subProductions = productions.map {
-            | production |
-            production.children[columnIndex]
-        }
-
-        matchRecursively(outp, subValue, subProductions) {
-            | indexList |
-            subMatch = SubMatch.new(indexList, productions)
-            matchColumn(outp, valueName, subMatch.productions, columnIndex + 1) {
-                | indexList |
-                yield subMatch.mapIndexList(indexList)
-            }
-        }
-
-        outp.puts "}"
-    end
-
-    def matchRecursively(outp, valueName, productions)
-        matchProductions(outp, valueName, productions) {
-            | indexList |
-            # We are guaranteed that this index list contains productions with the same opcode and
-            # the same length.
-            subMatch = SubMatch.new(indexList, productions)
-            matchColumn(outp, valueName, subMatch.productions, 0) {
-                | indexList |
-                yield subMatch.mapIndexList(indexList)
-            }
-        }
-    end
-
-    matchRecursively(outp, "_value", matcher.productions) {
-        | indexList |
-        indexList.each {
-            | index |
-            production = matcher.productions[index]
-            outp.puts "{"
-            outp.puts "Value* #{production.varName} = _value;"
-            outp.puts "UNUSED_PARAM(#{production.varName});"
-            internalArguments = []
-            operandArguments = []
-            tryArguments = []
-            numScopes = 0
-            seen = {}
-
-            # FIXME: We want to be able to combine pattern matchers, like have the pattern match rule
-            # for Branch in the lowering patcher delegate to a matcher that can deduce the kind of
-            # comparison that we're doing. We could then reuse that latter matcher for Check and
-            # unfused compare. The key to making such a delegation work is to have the inner matcher
-            # (the comparison matcher in this case) keep cascading if it encounters a rule that
-            # produces something that the outer matcher cannot handle. It's not obvious that the
-            # comparison matcher would need this, but the address matcher probably will. For example,
-            # we may have a StoreAddLoad rule that delegates to the address matcher, and the address
-            # matcher may produce an address that the lowering matcher cannot use because while the
-            # Add form does accept addresses it may not accept the particular address that the
-            # address matcher produced. In that case, instead of giving up on the StoreAddLoad rule,
-            # we want the address matcher to just try a different address. The comparison matcher
-            # might want this just for better controlling when to pin variables. Currently, if it
-            # constructed some code, it would also pin the variables that it used. If the lowering
-            # matcher then decided to reject the code created by the comparison matcher, it would
-            # have a hard time "unpinning" those variables. But if we had some kind of delegation, we
-            # could have the pinning of the comparison matcher happen only if the lowering matcher
-            # accepted.
-            #
-            # This could all be accomplished by having tryBlah() return something, and have the
-            # matcher also take a functor argument that accepts what tryBlah() returns. So instead of
-            #
-            # if (tryBlah()) ok;
-            #
-            # We would have:
-            #
-            # if (functor(tryBlah()) ok;
-            #
-            # When lowering decided to delegate to the address or comparison matcher, it could supply
-            # a functor that decides whether the thing that the sub-matcher selected is indeed
-            # useful.
-            #
-            # In the near term, we probably won't need this. But we will definitely need it as we
-            # expand to efficiently support more platforms. On X86_64, any branching instruction is
-            # usable in any context, and any address should be usable in any context (and the only
-            # cases where it wouldn't be is if we have holes in the MacroAssembler).
-            #
-            # https://bugs.webkit.org/show_bug.cgi?id=150559
-            
-            production.eachVariable {
-                | parentVarName, childName, childVarName, kind, index |
-                loadExpr = "#{parentVarName}->child(#{index})"
-                
-                if seen[childVarName]
-                    tmp = newVar
-                    outp.puts "Value* #{tmp} = #{loadExpr};"
-                    outp.puts "if (#{tmp} == #{childVarName}) {"
-                    numScopes += 1
-                else
-                    seen[childVarName] = true
-                    
-                    outp.puts "Value* #{childVarName} = #{loadExpr};"
-
-                    # FIXME: Consider getting rid of the $ prefix.
-                    # https://bugs.webkit.org/show_bug.cgi?id=150527
-                    if childName =~ /\A\$/
-                        if childName =~ /\A\$([0-9]+)\Z/
-                            outp.puts "if (#{childVarName}->isInt(#{$1})) {"
-                        else
-                            outp.puts "if (#{childVarName}->hasInt()) {"
-                        end
-                        numScopes += 1
-                    end
-                    
-                    internalArguments << childVarName if kind == :internal or kind == :anonInternal
-                    operandArguments << childVarName if kind == :operand or kind == :anonOperand
-                    tryArguments << childVarName if kind == :internal or kind == :operand
-                end
-            }
-            outp.puts "if (adaptor.acceptRoot(_value)"
-            unless internalArguments.empty?
-                outp.puts("&& adaptor.acceptInternals(" + internalArguments.join(", ") + ")")
-            end
-            unless operandArguments.empty?
-                outp.puts("&& adaptor.acceptOperands(" + operandArguments.join(", ") + ")")
-            end
-            outp.puts("&& adaptor.try#{production.name}(" + tryArguments.join(", ") + ")) {")
-            outp.puts "adaptor.acceptRootLate(_value);"
-            unless internalArguments.empty?
-                outp.puts "adaptor.acceptInternalsLate(" + internalArguments.join(", ") + ");"
-            end
-            unless operandArguments.empty?
-                outp.puts "adaptor.acceptOperandsLate(" + operandArguments.join(", ") + ");"
-            end
-            outp.puts "return true;"
-            outp.puts "}"
-            numScopes.times {
-                outp.puts "}"
-            }
-            outp.puts "}"
-        }
-    }
-    
-    outp.puts "    return false;"
-    outp.puts "}"
-    
-    outp.puts "} } // namespace JSC::B3"
-
-    outp.puts "#endif // B3#{matcher.name}_h"
-}