deniskuzZ commented on code in PR #4043:
URL: https://github.com/apache/hive/pull/4043#discussion_r1389415882
##########
ql/src/java/org/apache/hadoop/hive/ql/optimizer/DynamicPartitionPruningOptimization.java:
##########
@@ -678,38 +678,34 @@ private boolean generateSemiJoinOperatorPlan(DynamicListContext ctx, ParseContex
     ArrayList<ColumnInfo> groupbyColInfos = new ArrayList<ColumnInfo>();
     groupbyColInfos.add(new ColumnInfo(gbOutputNames.get(0), key.getTypeInfo(), "", false));
     groupbyColInfos.add(new ColumnInfo(gbOutputNames.get(1), key.getTypeInfo(), "", false));
-    groupbyColInfos.add(new ColumnInfo(gbOutputNames.get(2), key.getTypeInfo(), "", false));
+    groupbyColInfos.add(new ColumnInfo(gbOutputNames.get(2), TypeInfoFactory.binaryTypeInfo, "", false));
     GroupByOperator groupByOp = (GroupByOperator)OperatorFactory.getAndMakeChild(
         groupBy, new RowSchema(groupbyColInfos), selectOp);
     groupByOp.setColumnExprMap(new HashMap<String, ExprNodeDesc>());
     // Get the column names of the aggregations for reduce sink
-    int colPos = 0;
     ArrayList<ExprNodeDesc> rsValueCols = new ArrayList<ExprNodeDesc>();
     Map<String, ExprNodeDesc> columnExprMap = new HashMap<String, ExprNodeDesc>();
-    for (int i = 0; i < aggs.size() - 1; i++) {
-      ExprNodeColumnDesc colExpr = new ExprNodeColumnDesc(key.getTypeInfo(),
-          gbOutputNames.get(colPos), "", false);
+    ArrayList<ColumnInfo> rsColInfos = new ArrayList<>();
+    for (int colPos = 0; colPos < aggs.size(); colPos++) {
+      TypeInfo typInfo = groupbyColInfos.get(colPos).getType();
+      ExprNodeColumnDesc colExpr = new ExprNodeColumnDesc(typInfo, gbOutputNames.get(colPos), "", false);
       rsValueCols.add(colExpr);
-      columnExprMap.put(gbOutputNames.get(colPos), colExpr);
-      colPos++;
-    }
+      columnExprMap.put(Utilities.ReduceField.VALUE + "." + gbOutputNames.get(colPos), colExpr);
-    // Bloom Filter uses binary
-    ExprNodeColumnDesc colExpr = new ExprNodeColumnDesc(TypeInfoFactory.binaryTypeInfo,
-        gbOutputNames.get(colPos), "", false);
-    rsValueCols.add(colExpr);
-    columnExprMap.put(gbOutputNames.get(colPos), colExpr);
-    colPos++;
+    ColumnInfo colInfo =
Review Comment:
Thank you @kgyrtkirk for chiming in!
I don't have much expertise in this area, so a small question:
when FinalRsForSemiJoinOp doesn't prefix its columns with ReduceField.VALUE
(in both columnExprMap and the schema), ParallelEdgeFixer kicks in and introduces
a concentrator RS that expects [key, value] inputs:
````
Caused by: java.lang.RuntimeException: cannot find field _col0 from [0:key, 1:value]
  at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:550)
  at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:153)
  at org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:56)
  at org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:1073)
  at org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:1099)
  at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:74)
  at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:360)
  at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.init(ReduceRecordProcessor.java:191)
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:292)
````
Should we prefix the columns in FinalRsForSemiJoinOp, or rather fix how
fixParallelEdge wires up the SEL inputs?
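To make the mismatch concrete, here is a minimal standalone sketch (not Hive code; the class and helper names are made up) of the lookup that `getStandardStructFieldRef` effectively performs: the reduce-side row only exposes the top-level fields `key` and `value`, so a bare aggregation column name like `_col0` cannot be resolved, while a `ReduceField.VALUE`-prefixed name can.

```java
import java.util.Locale;
import java.util.Set;

// Illustrative sketch only: simulates resolving a column reference against
// a reduce-side row struct whose top-level fields are [key, value], as in
// the "cannot find field _col0 from [0:key, 1:value]" stack trace above.
public class ReduceFieldPrefixSketch {
    // Top-level fields of the reduce-side row, as printed in the error.
    static final Set<String> REDUCE_ROW_FIELDS = Set.of("key", "value");

    // Resolves only the first path segment of a dotted column reference,
    // case-insensitively (mirroring getStandardStructFieldRef's behavior).
    static boolean resolvable(String columnRef) {
        String topLevel = columnRef.split("\\.", 2)[0].toLowerCase(Locale.ROOT);
        return REDUCE_ROW_FIELDS.contains(topLevel);
    }

    public static void main(String[] args) {
        // Unprefixed aggregation column: lookup fails.
        System.out.println(resolvable("_col0"));        // prints false
        // Prefixed as in the PR's columnExprMap keys: resolves via "value".
        System.out.println(resolvable("VALUE._col0"));  // prints true
    }
}
```

This is just the resolution mechanics; whether the prefix belongs in FinalRsForSemiJoinOp or in the fixParallelEdge rewiring is the open question above.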
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]