Title: [242442] releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore
Revision: 242442
Author: carlo...@webkit.org
Date: 2019-03-05 04:40:38 -0800 (Tue, 05 Mar 2019)

Log Message

Merge r241964 - B3ReduceStrength: missing peephole optimizations for binary operations
https://bugs.webkit.org/show_bug.cgi?id=194252

Reviewed by Saam Barati.

Adds several sets of optimizations for BitAnd, BitOr and BitXor.
Using BitAnd distributivity over BitOr and BitXor:
  Turn any of these (for Op == BitOr || Op == BitXor):
        Op(BitAnd(x1, x2), BitAnd(x1, x3))
        Op(BitAnd(x2, x1), BitAnd(x1, x3))
        Op(BitAnd(x1, x2), BitAnd(x3, x1))
        Op(BitAnd(x2, x1), BitAnd(x3, x1))
   Into this: BitAnd(Op(x2, x3), x1)
   And any of these:
        Op(BitAnd(x1, x2), x1)
        Op(BitAnd(x2, x1), x1)
        Op(x1, BitAnd(x1, x2))
        Op(x1, BitAnd(x2, x1))
   Into this: BitAnd(Op(x2, x1), x1)
   This second set is equivalent to doing x1 => BitAnd(x1, x1), and then applying the first set.
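Not part of the patch, but a minimal standalone sketch (plain C++, nothing assumed beyond <cassert> and <cstdint>) that spot-checks the identities above on a few values:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        for (uint32_t x1 : { 0u, 1u, 0xff00u, 0xffffffffu }) {
            for (uint32_t x2 : { 0u, 3u, 0x0ff0u, 0xffffffffu }) {
                for (uint32_t x3 : { 0u, 7u, 0xf00fu, 0xffffffffu }) {
                    // First set: Op(BitAnd(x1, x2), BitAnd(x1, x3)) == BitAnd(Op(x2, x3), x1)
                    assert(((x1 & x2) | (x1 & x3)) == ((x2 | x3) & x1));
                    assert(((x1 & x2) ^ (x1 & x3)) == ((x2 ^ x3) & x1));
                    // Second set (x3 == x1): Op(BitAnd(x1, x2), x1) == BitAnd(Op(x2, x1), x1)
                    assert(((x1 & x2) | x1) == ((x2 | x1) & x1));
                    assert(((x1 & x2) ^ x1) == ((x2 ^ x1) & x1));
                }
            }
        }
        return 0;
    }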
Using De Morgan's laws (we represent "not" as BitXor with allOnes):
  BitAnd(BitXor(x1, allOnes), BitXor(x2, allOnes)) => BitXor(BitOr(x1, x2), allOnes)
  BitOr(BitXor(x1, allOnes), BitXor(x2, allOnes)) => BitXor(BitAnd(x1, x2), allOnes)
  BitOr(BitXor(x, allOnes), c) => BitXor(BitAnd(x, ~c), allOnes)
  BitAnd(BitXor(x, allOnes), c) => BitXor(BitOr(x, ~c), allOnes)
The latter two are equivalent to rewriting c as BitXor(~c, allOnes), and then applying the former two.
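Again illustrative only (not from the patch): the same rewrites checked directly in C++, spelling "not x" as x ^ allOnes the way B3 does:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        const uint32_t allOnes = 0xffffffffu; // i.e. std::numeric_limits<uint32_t>::max()
        for (uint32_t x : { 0u, 0x1234u, 0xffffffffu }) {
            for (uint32_t c : { 0u, 0x00ffu, 0xffffffffu }) {
                // De Morgan on two negated operands:
                assert(((x ^ allOnes) & (c ^ allOnes)) == ((x | c) ^ allOnes));
                assert(((x ^ allOnes) | (c ^ allOnes)) == ((x & c) ^ allOnes));
                // Constant variants, treating c as BitXor(~c, allOnes):
                assert(((x ^ allOnes) | c) == ((x & ~c) ^ allOnes));
                assert(((x ^ allOnes) & c) == ((x | ~c) ^ allOnes));
            }
        }
        return 0;
    }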

All of these transformations either reduce the number of operations (which we always do when possible), or bring the expression closer to having:
  - BitXor with all ones at the outermost level
  - then BitAnd
  - then other BitXors
  - then BitOr at the innermost level.
The transformations that don't directly reduce the number of operations are still useful for normalization (helping passes like CSE), and can also enable
further optimizations (for example, BitXors with all ones can easily cancel each other out once they are all at the outermost level).
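One concrete illustration of that cancellation (hypothetical, not from the patch): the BitAnd rule above rewrites ~a & ~b as ~(a | b), so an expression like ~(~a & ~b) ends up with two adjacent BitXors with allOnes that fold away, leaving just a | b:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        const uint32_t allOnes = 0xffffffffu;
        for (uint32_t a : { 0u, 0x5a5au, 0xffffffffu }) {
            for (uint32_t b : { 0u, 0x00ffu, 0xffffffffu }) {
                // ~(~a & ~b): after normalization the two outer nots cancel, leaving a | b.
                assert((((a ^ allOnes) & (b ^ allOnes)) ^ allOnes) == (a | b));
            }
        }
        return 0;
    }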

* b3/B3ReduceStrength.cpp:
* b3/testb3.cpp:
(JSC::B3::testBitAndNotNot):
(JSC::B3::testBitAndNotImm):
(JSC::B3::testBitOrAndAndArgs):
(JSC::B3::testBitOrAndSameArgs):
(JSC::B3::testBitOrNotNot):
(JSC::B3::testBitOrNotImm):
(JSC::B3::testBitXorAndAndArgs):
(JSC::B3::testBitXorAndSameArgs):
(JSC::B3::run):

Modified Paths

releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/ChangeLog
releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/B3ReduceStrength.cpp
releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/testb3.cpp

Diff

Modified: releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/ChangeLog (242441 => 242442)


--- releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/ChangeLog	2019-03-05 12:40:35 UTC (rev 242441)
+++ releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/ChangeLog	2019-03-05 12:40:38 UTC (rev 242442)
@@ -1,3 +1,52 @@
+2019-02-22  Robin Morisset  <rmoris...@apple.com>
+
+        B3ReduceStrength: missing peephole optimizations for binary operations
+        https://bugs.webkit.org/show_bug.cgi?id=194252
+
+        Reviewed by Saam Barati.
+
+        Adds several sets of optimizations for BitAnd, BitOr and BitXor.
+        Using BitAnd distributivity over BitOr and BitXor:
+          Turn any of these (for Op == BitOr || Op == BitXor):
+                Op(BitAnd(x1, x2), BitAnd(x1, x3))
+                Op(BitAnd(x2, x1), BitAnd(x1, x3))
+                Op(BitAnd(x1, x2), BitAnd(x3, x1))
+                Op(BitAnd(x2, x1), BitAnd(x3, x1))
+           Into this: BitAnd(Op(x2, x3), x1)
+           And any of these:
+                Op(BitAnd(x1, x2), x1)
+                Op(BitAnd(x2, x1), x1)
+                Op(x1, BitAnd(x1, x2))
+                Op(x1, BitAnd(x2, x1))
+           Into this: BitAnd(Op(x2, x1), x1)
+           This second set is equivalent to doing x1 => BitAnd(x1, x1), and then applying the first set.
+        Using De Morgan's laws (we represent "not" as BitXor with allOnes):
+          BitAnd(BitXor(x1, allOnes), BitXor(x2, allOnes)) => BitXor(BitOr(x1, x2), allOnes)
+          BitOr(BitXor(x1, allOnes), BitXor(x2, allOnes)) => BitXor(BitAnd(x1, x2), allOnes)
+          BitOr(BitXor(x, allOnes), c) => BitXor(BitAnd(x, ~c), allOnes)
+          BitAnd(BitXor(x, allOnes), c) => BitXor(BitOr(x, ~c), allOnes)
+        The latter two are equivalent to rewriting c as BitXor(~c, allOnes), and then applying the former two.
+
+        All of these transformations either reduce the number of operations (which we always do when possible), or bring the expression closer to having:
+          - BitXor with all ones at the outermost level
+          - then BitAnd
+          - then other BitXors
+          - then BitOr at the innermost level.
+        The transformations that don't directly reduce the number of operations are still useful for normalization (helping passes like CSE), and can also enable
+        further optimizations (for example, BitXors with all ones can easily cancel each other out once they are all at the outermost level).
+
+        * b3/B3ReduceStrength.cpp:
+        * b3/testb3.cpp:
+        (JSC::B3::testBitAndNotNot):
+        (JSC::B3::testBitAndNotImm):
+        (JSC::B3::testBitOrAndAndArgs):
+        (JSC::B3::testBitOrAndSameArgs):
+        (JSC::B3::testBitOrNotNot):
+        (JSC::B3::testBitOrNotImm):
+        (JSC::B3::testBitXorAndAndArgs):
+        (JSC::B3::testBitXorAndSameArgs):
+        (JSC::B3::run):
+
 2019-02-22  Yusuke Suzuki  <ysuz...@apple.com>
 
         Unreviewed, build fix after r241954

Modified: releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/B3ReduceStrength.cpp (242441 => 242442)


--- releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/B3ReduceStrength.cpp	2019-03-05 12:40:35 UTC (rev 242441)
+++ releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/B3ReduceStrength.cpp	2019-03-05 12:40:38 UTC (rev 242442)
@@ -962,8 +962,8 @@
 
             // Turn this: BitAnd(value, all-ones)
             // Into this: value.
-            if ((m_value->type() == Int64 && m_value->child(1)->isInt(0xffffffffffffffff))
-                || (m_value->type() == Int32 && m_value->child(1)->isInt(0xffffffff))) {
+            if ((m_value->type() == Int64 && m_value->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                || (m_value->type() == Int32 && m_value->child(1)->isInt(std::numeric_limits<uint32_t>::max()))) {
                 replaceWithIdentity(m_value->child(0));
                 break;
             }
@@ -984,6 +984,7 @@
                 && !(m_value->child(1)->asInt32() & 0xffffff00)) {
                 m_value->child(0) = m_value->child(0)->child(0);
                 m_changed = true;
+                break;
             }
 
             // Turn this: BitAnd(SExt16(value), mask) where (mask & 0xffff0000) == 0
@@ -992,6 +993,7 @@
                 && !(m_value->child(1)->asInt32() & 0xffff0000)) {
                 m_value->child(0) = m_value->child(0)->child(0);
                 m_changed = true;
+                break;
             }
 
             // Turn this: BitAnd(SExt32(value), mask) where (mask & 0xffffffff00000000) == 0
@@ -1002,6 +1004,7 @@
                     m_index, ZExt32, m_value->origin(),
                     m_value->child(0)->child(0), m_value->child(0)->child(1));
                 m_changed = true;
+                break;
             }
 
             // Turn this: BitAnd(Op(value, constant1), constant2)
@@ -1023,7 +1026,40 @@
                 default:
                     break;
                 }
+                break;
             }
+
+            // Turn this: BitAnd(BitXor(x1, allOnes), BitXor(x2, allOnes))
+            // Into this: BitXor(BitOr(x1, x2), allOnes)
+            // By applying De Morgan's laws.
+            if (m_value->child(0)->opcode() == BitXor
+                && m_value->child(1)->opcode() == BitXor
+                && ((m_value->type() == Int64
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint64_t>::max())
+                        && m_value->child(1)->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                    || (m_value->type() == Int32
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint32_t>::max())
+                        && m_value->child(1)->child(1)->isInt(std::numeric_limits<uint32_t>::max())))) {
+                Value* bitOr = m_insertionSet.insert<Value>(m_index, BitOr, m_value->origin(), m_value->child(0)->child(0), m_value->child(1)->child(0));
+                replaceWithNew<Value>(BitXor, m_value->origin(), bitOr, m_value->child(1)->child(1));
+                break;
+            }
+
+            // Turn this: BitAnd(BitXor(x, allOnes), c)
+            // Into this: BitXor(BitOr(x, ~c), allOnes)
+            // This is a variation on the previous optimization, treating c as if it were BitXor(~c, allOnes)
+            // It does not reduce the number of operations, but provides some normalization (we try to get BitXor with allOnes at the outermost point), and some chance to float the Xors to a place where they might get eliminated.
+            if (m_value->child(0)->opcode() == BitXor
+                && m_value->child(1)->hasInt()
+                && ((m_value->type() == Int64
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                    || (m_value->type() == Int32
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint32_t>::max())))) {
+                Value* bitOr = m_insertionSet.insert<Value>(m_index, BitOr, m_value->origin(), m_value->child(0)->child(0), m_value->child(1)->bitXorConstant(m_proc, m_value->child(0)->child(1)));
+                replaceWithNew<Value>(BitXor, m_value->origin(), bitOr, m_value->child(0)->child(1));
+                break;
+            }
+
             break;
 
         case BitOr:
@@ -1064,12 +1100,46 @@
 
             // Turn this: BitOr(value, all-ones)
             // Into this: all-ones.
-            if ((m_value->type() == Int64 && m_value->child(1)->isInt(0xffffffffffffffff))
-                || (m_value->type() == Int32 && m_value->child(1)->isInt(0xffffffff))) {
+            if ((m_value->type() == Int64 && m_value->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                || (m_value->type() == Int32 && m_value->child(1)->isInt(std::numeric_limits<uint32_t>::max()))) {
                 replaceWithIdentity(m_value->child(1));
                 break;
             }
 
+            // Turn this: BitOr(BitXor(x1, allOnes), BitXor(x2, allOnes))
+            // Into this: BitXor(BitAnd(x1, x2), allOnes)
+            // By applying De Morgan's laws.
+            if (m_value->child(0)->opcode() == BitXor
+                && m_value->child(1)->opcode() == BitXor
+                && ((m_value->type() == Int64
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint64_t>::max())
+                        && m_value->child(1)->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                    || (m_value->type() == Int32
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint32_t>::max())
+                        && m_value->child(1)->child(1)->isInt(std::numeric_limits<uint32_t>::max())))) {
+                Value* bitAnd = m_insertionSet.insert<Value>(m_index, BitAnd, m_value->origin(), m_value->child(0)->child(0), m_value->child(1)->child(0));
+                replaceWithNew<Value>(BitXor, m_value->origin(), bitAnd, m_value->child(1)->child(1));
+                break;
+            }
+
+            // Turn this: BitOr(BitXor(x, allOnes), c)
+            // Into this: BitXor(BitAnd(x, ~c), allOnes)
+            // This is a variation on the previous optimization, treating c as if it were BitXor(~c, allOnes)
+            // It does not reduce the number of operations, but provides some normalization (we try to get BitXor with allOnes at the outermost point), and some chance to float the Xors to a place where they might get eliminated.
+            if (m_value->child(0)->opcode() == BitXor
+                && m_value->child(1)->hasInt()
+                && ((m_value->type() == Int64
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint64_t>::max()))
+                    || (m_value->type() == Int32
+                        && m_value->child(0)->child(1)->isInt(std::numeric_limits<uint32_t>::max())))) {
+                Value* bitAnd = m_insertionSet.insert<Value>(m_index, BitAnd, m_value->origin(), m_value->child(0)->child(0), m_value->child(1)->bitXorConstant(m_proc, m_value->child(0)->child(1)));
+                replaceWithNew<Value>(BitXor, m_value->origin(), bitAnd, m_value->child(0)->child(1));
+                break;
+            }
+
+            if (handleBitAndDistributivity())
+                break;
+
             break;
 
         case BitXor:
@@ -1116,6 +1186,9 @@
                 replaceWithIdentity(m_value->child(0));
                 break;
             }
+                
+            if (handleBitAndDistributivity())
+                break;
 
             break;
 
@@ -2210,6 +2283,70 @@
         }
     }
 
+    // For Op==BitOr or BitXor, turn any of these:
+    //      Op(BitAnd(x1, x2), BitAnd(x1, x3))
+    //      Op(BitAnd(x2, x1), BitAnd(x1, x3))
+    //      Op(BitAnd(x1, x2), BitAnd(x3, x1))
+    //      Op(BitAnd(x2, x1), BitAnd(x3, x1))
+    // Into this: BitAnd(Op(x2, x3), x1)
+    // And any of these:
+    //      Op(BitAnd(x1, x2), x1)
+    //      Op(BitAnd(x2, x1), x1)
+    //      Op(x1, BitAnd(x1, x2))
+    //      Op(x1, BitAnd(x2, x1))
+    // Into this: BitAnd(Op(x2, x1), x1)
+    // This second set is equivalent to doing x1 => BitAnd(x1, x1), and then applying the first set.
+    // It does not reduce the number of operations executed, but provides some useful normalization: we prefer to have BitAnd at the outermost level, then BitXor, and finally BitOr at the innermost.
+    bool handleBitAndDistributivity()
+    {
+        ASSERT(m_value->opcode() == BitOr || m_value->opcode() == BitXor);
+        Value* x1 = nullptr;
+        Value* x2 = nullptr;
+        Value* x3 = nullptr;
+        if (m_value->child(0)->opcode() == BitAnd && m_value->child(1)->opcode() == BitAnd) {
+            if (m_value->child(0)->child(0) == m_value->child(1)->child(0)) {
+                x1 = m_value->child(0)->child(0);
+                x2 = m_value->child(0)->child(1);
+                x3 = m_value->child(1)->child(1);
+            } else if (m_value->child(0)->child(1) == m_value->child(1)->child(0)) {
+                x1 = m_value->child(0)->child(1);
+                x2 = m_value->child(0)->child(0);
+                x3 = m_value->child(1)->child(1);
+            } else if (m_value->child(0)->child(0) == m_value->child(1)->child(1)) {
+                x1 = m_value->child(0)->child(0);
+                x2 = m_value->child(0)->child(1);
+                x3 = m_value->child(1)->child(0);
+            } else if (m_value->child(0)->child(1) == m_value->child(1)->child(1)) {
+                x1 = m_value->child(0)->child(1);
+                x2 = m_value->child(0)->child(0);
+                x3 = m_value->child(1)->child(0);
+            }
+        } else if (m_value->child(0)->opcode() == BitAnd) {
+            if (m_value->child(0)->child(0) == m_value->child(1)) {
+                x1 = x3 = m_value->child(1);
+                x2 = m_value->child(0)->child(1);
+            } else if (m_value->child(0)->child(1) == m_value->child(1)) {
+                x1 = x3 = m_value->child(1);
+                x2 = m_value->child(0)->child(0);
+            }
+        } else if (m_value->child(1)->opcode() == BitAnd) {
+            if (m_value->child(1)->child(0) == m_value->child(0)) {
+                x1 = x3 = m_value->child(0);
+                x2 = m_value->child(1)->child(1);
+            } else if (m_value->child(1)->child(1) == m_value->child(0)) {
+                x1 = x3 = m_value->child(0);
+                x2 = m_value->child(1)->child(0);
+            }
+        }
+        if (x1 != nullptr) {
+            ASSERT(x2 != nullptr && x3 != nullptr);
+            Value* bitOp = m_insertionSet.insert<Value>(m_index, m_value->opcode(), m_value->origin(), x2, x3);
+            replaceWithNew<Value>(BitAnd, m_value->origin(), bitOp, x1);
+            return true;
+        }
+        return false;
+    }
+
     struct CanonicalizedComparison {
         Opcode opcode;
         Value* operands[2];

Modified: releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/testb3.cpp (242441 => 242442)


--- releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/testb3.cpp	2019-03-05 12:40:35 UTC (rev 242441)
+++ releases/WebKitGTK/webkit-2.24/Source/JavaScriptCore/b3/testb3.cpp	2019-03-05 12:40:38 UTC (rev 242442)
@@ -2486,6 +2486,41 @@
     CHECK(compileAndRun<int64_t>(proc, a) == a);
 }
 
+void testBitAndNotNot(int64_t a, int64_t b)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+    Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+    Value* notA = root->appendNew<Value>(proc, BitXor, Origin(), argA, root->appendNew<Const64Value>(proc, Origin(), -1));
+    Value* notB = root->appendNew<Value>(proc, BitXor, Origin(), argB, root->appendNew<Const64Value>(proc, Origin(), -1));
+    root->appendNewControlValue(
+        proc, Return, Origin(),
+        root->appendNew<Value>(
+            proc, BitAnd, Origin(),
+            notA,
+            notB));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc, a, b), (~a & ~b));
+}
+
+void testBitAndNotImm(int64_t a, int64_t b)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+    Value* notA = root->appendNew<Value>(proc, BitXor, Origin(), argA, root->appendNew<Const64Value>(proc, Origin(), -1));
+    Value* cstB = root->appendNew<Const64Value>(proc, Origin(), b);
+    root->appendNewControlValue(
+        proc, Return, Origin(),
+        root->appendNew<Value>(
+            proc, BitAnd, Origin(),
+            notA,
+            cstB));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc, a, b), (~a & b));
+}
+
 void testBitAndImms(int64_t a, int64_t b)
 {
     Procedure proc;
@@ -2870,6 +2905,91 @@
     CHECK(compileAndRun<int64_t>(proc, a) == a);
 }
 
+void testBitOrAndAndArgs(int64_t a, int64_t b, int64_t c)
+{
+    // We want to check every possible ordering of arguments (to properly check every path in B3ReduceStrength):
+    // ((a & b) | (a & c))
+    // ((a & b) | (c & a))
+    // ((b & a) | (a & c))
+    // ((b & a) | (c & a))
+    for (int i = 0; i < 4; ++i) {
+        Procedure proc;
+        BasicBlock* root = proc.addBlock();
+        Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+        Value* argC = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR2);
+        Value* andAB = i & 2 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argB)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argB, argA);
+        Value* andAC = i & 1 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argC)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argC, argA);
+        root->appendNewControlValue(
+            proc, Return, Origin(),
+            root->appendNew<Value>(
+                proc, BitOr, Origin(),
+                andAB,
+                andAC));
+
+        CHECK_EQ(compileAndRun<int64_t>(proc, a, b, c), ((a & b) | (a & c)));
+    }
+}
+
+void testBitOrAndSameArgs(int64_t a, int64_t b)
+{
+    // We want to check every possible ordering of arguments (to properly check every path in B3ReduceStrength):
+    // ((a & b) | a)
+    // ((b & a) | a)
+    // (a | (a & b))
+    // (a | (b & a))
+    for (int i = 0; i < 4; ++i) {
+        Procedure proc;
+        BasicBlock* root = proc.addBlock();
+        Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+        Value* andAB = i & 1 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argB)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argB, argA);
+        Value* result = i & 2 ? root->appendNew<Value>(proc, BitOr, Origin(), andAB, argA)
+            : root->appendNew<Value>(proc, BitOr, Origin(), argA, andAB);
+        root->appendNewControlValue(proc, Return, Origin(), result);
+
+        CHECK_EQ(compileAndRun<int64_t>(proc, a, b), ((a & b) | a));
+    }
+}
+
+void testBitOrNotNot(int64_t a, int64_t b)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+    Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+    Value* notA = root->appendNew<Value>(proc, BitXor, Origin(), argA, root->appendNew<Const64Value>(proc, Origin(), -1));
+    Value* notB = root->appendNew<Value>(proc, BitXor, Origin(), argB, root->appendNew<Const64Value>(proc, Origin(), -1));
+    root->appendNewControlValue(
+        proc, Return, Origin(),
+        root->appendNew<Value>(
+            proc, BitOr, Origin(),
+            notA,
+            notB));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc, a, b), (~a | ~b));
+}
+
+void testBitOrNotImm(int64_t a, int64_t b)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+    Value* notA = root->appendNew<Value>(proc, BitXor, Origin(), argA, root->appendNew<Const64Value>(proc, Origin(), -1));
+    Value* cstB = root->appendNew<Const64Value>(proc, Origin(), b);
+    root->appendNewControlValue(
+        proc, Return, Origin(),
+        root->appendNew<Value>(
+            proc, BitOr, Origin(),
+            notA,
+            cstB));
+    
+    CHECK_EQ(compileAndRun<int64_t>(proc, a, b), (~a | b));
+}
+
 void testBitOrImms(int64_t a, int64_t b)
 {
     Procedure proc;
@@ -3232,6 +3352,56 @@
     CHECK(!compileAndRun<int64_t>(proc, a));
 }
 
+void testBitXorAndAndArgs(int64_t a, int64_t b, int64_t c)
+{
+    // We want to check every possible ordering of arguments (to properly check every path in B3ReduceStrength):
+    // ((a & b) ^ (a & c))
+    // ((a & b) ^ (c & a))
+    // ((b & a) ^ (a & c))
+    // ((b & a) ^ (c & a))
+    for (int i = 0; i < 4; ++i) {
+        Procedure proc;
+        BasicBlock* root = proc.addBlock();
+        Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+        Value* argC = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR2);
+        Value* andAB = i & 2 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argB)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argB, argA);
+        Value* andAC = i & 1 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argC)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argC, argA);
+        root->appendNewControlValue(
+            proc, Return, Origin(),
+            root->appendNew<Value>(
+                proc, BitXor, Origin(),
+                andAB,
+                andAC));
+
+        CHECK_EQ(compileAndRun<int64_t>(proc, a, b, c), ((a & b) ^ (a & c)));
+    }
+}
+
+void testBitXorAndSameArgs(int64_t a, int64_t b)
+{
+    // We want to check every possible ordering of arguments (to properly check every path in B3ReduceStrength):
+    // ((a & b) ^ a)
+    // ((b & a) ^ a)
+    // (a ^ (a & b))
+    // (a ^ (b & a))
+    for (int i = 0; i < 4; ++i) {
+        Procedure proc;
+        BasicBlock* root = proc.addBlock();
+        Value* argA = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* argB = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+        Value* andAB = i & 1 ? root->appendNew<Value>(proc, BitAnd, Origin(), argA, argB)
+            : root->appendNew<Value>(proc, BitAnd, Origin(), argB, argA);
+        Value* result = i & 2 ? root->appendNew<Value>(proc, BitXor, Origin(), andAB, argA)
+            : root->appendNew<Value>(proc, BitXor, Origin(), argA, andAB);
+        root->appendNewControlValue(proc, Return, Origin(), result);
+        
+        CHECK_EQ(compileAndRun<int64_t>(proc, a, b), ((a & b) ^ a));
+    }
+}
+
 void testBitXorImms(int64_t a, int64_t b)
 {
     Procedure proc;
@@ -16452,6 +16622,22 @@
                 }));                                        \
         }                                                   \
     }
+#define RUN_TERNARY(test, valuesA, valuesB, valuesC) \
+    for (auto a : valuesA) {                                    \
+        for (auto b : valuesB) {                                \
+            for (auto c : valuesC) {                            \
+                CString testStr = toCString(#test, "(", a.name, ", ", b.name, ", ", c.name, ")"); \
+                if (!shouldRun(testStr.data()))                 \
+                    continue;                                   \
+                tasks.append(createSharedTask<void()>(          \
+                    [=] () {                                    \
+                        dataLog(toCString(testStr, "...\n"));   \
+                        test(a.value, b.value, c.value);        \
+                        dataLog(toCString(testStr, ": OK!\n")); \
+                    }));                                        \
+            }                                                   \
+        }                                                       \
+    }
 
 void run(const char* filter)
 {
@@ -16780,6 +16966,8 @@
     RUN_BINARY(testBitAndArgImmFloat, floatingPointOperands<float>(), floatingPointOperands<float>());
     RUN_BINARY(testBitAndImmsFloat, floatingPointOperands<float>(), floatingPointOperands<float>());
     RUN_BINARY(testBitAndArgsFloatWithUselessDoubleConversion, floatingPointOperands<float>(), floatingPointOperands<float>());
+    RUN_BINARY(testBitAndNotNot, int64Operands(), int64Operands());
+    RUN_BINARY(testBitAndNotImm, int64Operands(), int64Operands());
 
     RUN(testBitOrArgs(43, 43));
     RUN(testBitOrArgs(43, 0));
@@ -16842,6 +17030,10 @@
     RUN_BINARY(testBitOrArgImmFloat, floatingPointOperands<float>(), floatingPointOperands<float>());
     RUN_BINARY(testBitOrImmsFloat, floatingPointOperands<float>(), floatingPointOperands<float>());
     RUN_BINARY(testBitOrArgsFloatWithUselessDoubleConversion, floatingPointOperands<float>(), floatingPointOperands<float>());
+    RUN_TERNARY(testBitOrAndAndArgs, int64Operands(), int64Operands(), int64Operands());
+    RUN_BINARY(testBitOrAndSameArgs, int64Operands(), int64Operands());
+    RUN_BINARY(testBitOrNotNot, int64Operands(), int64Operands());
+    RUN_BINARY(testBitOrNotImm, int64Operands(), int64Operands());
 
     RUN_BINARY(testBitXorArgs, int64Operands(), int64Operands());
     RUN_UNARY(testBitXorSameArg, int64Operands());
@@ -16880,6 +17072,8 @@
     RUN(testBitXorImmBitXorArgImm32(7, 2, 3));
     RUN(testBitXorImmBitXorArgImm32(6, 1, 6));
     RUN(testBitXorImmBitXorArgImm32(24, 0xffff, 7));
+    RUN_TERNARY(testBitXorAndAndArgs, int64Operands(), int64Operands(), int64Operands());
+    RUN_BINARY(testBitXorAndSameArgs, int64Operands(), int64Operands());
 
     RUN_UNARY(testBitNotArg, int64Operands());
     RUN_UNARY(testBitNotImm, int64Operands());