kasakrisz commented on code in PR #5717:
URL: https://github.com/apache/hive/pull/5717#discussion_r2084523828


##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java:
##########
@@ -1427,12 +1426,29 @@ private static SharedResult extractSharedOptimizationInfoForRoot(ParseContext pc
     if (equalOp1.getNumChild() > 1 || equalOp2.getNumChild() > 1) {
       // TODO: Support checking multiple child operators to merge further.
       discardableInputOps.addAll(gatherDPPBranchOps(pctx, optimizerCache, discardableOps));
-      return new SharedResult(retainableOps, discardableOps, discardableInputOps,
-          dataSize, maxDataSize);
+
+      // Accumulate InMemoryDataSize of unmerged MapJoin operators.
+      Set<Operator<?>> opsWork1 = findWorkOperators(optimizerCache, retainableTsOp);
+      for (Operator<?> op : opsWork1) {
+        if (op instanceof MapJoinOperator) {
+          MapJoinOperator mop = (MapJoinOperator) op;
+          dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
+          maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();

Review Comment:
   I debugged the test 
   ```
   mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sharedwork_mapjoin_datasize_check.q -pl itests/qtest -Dmaven.surefire.debug -Pitests
   ```
   but this code was never hit.
   
   Maybe I'm missing something, but could you please share how the new test `sharedwork_mapjoin_datasize_check.q` covers this part?



##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java:
##########
@@ -1474,8 +1489,29 @@ private static SharedResult extractSharedOptimizationInfoForRoot(ParseContext pc
         discardableInputOps.addAll(gatherDPPBranchOps(pctx, optimizerCache, discardableOps));
         discardableInputOps.addAll(gatherDPPBranchOps(pctx, optimizerCache, retainableOps,
             discardableInputOps));
-        return new SharedResult(retainableOps, discardableOps, discardableInputOps,
-            dataSize, maxDataSize);
+        bailOut = true;
+      }
+
+      if (bailOut) {
+        // Accumulate InMemoryDataSize of unmerged MapJoin operators.
+        Set<Operator<?>> opsWork1 = findWorkOperators(optimizerCache, currentOp1);
+        for (Operator<?> op : opsWork1) {
+          if (op instanceof MapJoinOperator) {
+            MapJoinOperator mop = (MapJoinOperator) op;
+            dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
+            maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();
+          }
+        }
+        Set<Operator<?>> opsWork2 = findWorkOperators(optimizerCache, currentOp2);
+        for (Operator<?> op : opsWork2) {
+          if (op instanceof MapJoinOperator) {
+            MapJoinOperator mop = (MapJoinOperator) op;
+            dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
+            maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();
+          }

Review Comment:
   This logic already exists at lines 1430-1446. Could you please extract it into a method, call that method here, and remove the `bailOut` variable?
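   
   For illustration only, a rough sketch of such an extraction could look like the following; the method name, the `long[]` return type, and the `SharedWorkOptimizerCache` parameter type are my assumptions, not something taken from this PR:
   ```java
   // Hypothetical helper (name and signature are assumptions): sums the in-memory data
   // size of every MapJoinOperator in the work rooted at rootOp and returns the updated
   // {dataSize, maxDataSize} pair, keeping the PR's current "last value wins" maxDataSize.
   private static long[] accumulateUnmergedMapJoinSizes(SharedWorkOptimizerCache optimizerCache,
       Operator<?> rootOp, long dataSize, long maxDataSize) {
     for (Operator<?> op : findWorkOperators(optimizerCache, rootOp)) {
       if (op instanceof MapJoinOperator) {
         MapJoinOperator mop = (MapJoinOperator) op;
         dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
         maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();
       }
     }
     return new long[] {dataSize, maxDataSize};
   }
   ```
   Both call sites could then invoke it once per root operator instead of repeating the loop.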



##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java:
##########
@@ -1427,12 +1426,29 @@ private static SharedResult extractSharedOptimizationInfoForRoot(ParseContext pc
     if (equalOp1.getNumChild() > 1 || equalOp2.getNumChild() > 1) {
       // TODO: Support checking multiple child operators to merge further.
       discardableInputOps.addAll(gatherDPPBranchOps(pctx, optimizerCache, discardableOps));
-      return new SharedResult(retainableOps, discardableOps, discardableInputOps,
-          dataSize, maxDataSize);
+
+      // Accumulate InMemoryDataSize of unmerged MapJoin operators.
+      Set<Operator<?>> opsWork1 = findWorkOperators(optimizerCache, retainableTsOp);
+      for (Operator<?> op : opsWork1) {
+        if (op instanceof MapJoinOperator) {
+          MapJoinOperator mop = (MapJoinOperator) op;
+          dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
+          maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();
+        }
+      }
+      Set<Operator<?>> opsWork2 = findWorkOperators(optimizerCache, discardableTsOp);
+      for (Operator<?> op : opsWork2) {
+        if (op instanceof MapJoinOperator) {
+          MapJoinOperator mop = (MapJoinOperator) op;
+          dataSize = StatsUtils.safeAdd(dataSize, mop.getConf().getInMemoryDataSize());
+          maxDataSize = mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize();

Review Comment:
   `maxDataSize` is overwritten every time a `MapJoinOperator` is found (the same happens at line 1436). What if the first `MapJoinOperator`'s conf has a bigger value and the second one's has a smaller one? In that case the smaller value is kept. Is this expected?
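   
   If the intent is to keep the largest value seen across the MapJoin operators (just my guess at the intent, not something stated in the PR), a minimal sketch would be:
   ```java
   // Keep the largest adjusted no-conditional-task size seen so far instead of the last one.
   maxDataSize = Math.max(maxDataSize,
       mop.getConf().getMemoryMonitorInfo().getAdjustedNoConditionalTaskSize());
   ```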


