vineetgarg02 commented on a change in pull request #1439:
URL: https://github.com/apache/hive/pull/1439#discussion_r479508618
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveOnTezCostModel.java
##########
@@ -89,22 +89,23 @@ public RelOptCost getAggregateCost(HiveAggregate aggregate) {
} else {
final RelMetadataQuery mq = aggregate.getCluster().getMetadataQuery();
// 1. Sum of input cardinalities
- final Double rCount = mq.getRowCount(aggregate.getInput());
- if (rCount == null) {
+ final Double inputRowCount = mq.getRowCount(aggregate.getInput());
+ final Double rowCount = mq.getRowCount(aggregate);
Review comment:
Can we change `rowCount` to `outputRowCount`? This will make the change
more readable.
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveOnTezCostModel.java
##########
@@ -89,22 +89,23 @@ public RelOptCost getAggregateCost(HiveAggregate aggregate) {
} else {
final RelMetadataQuery mq = aggregate.getCluster().getMetadataQuery();
// 1. Sum of input cardinalities
- final Double rCount = mq.getRowCount(aggregate.getInput());
- if (rCount == null) {
+ final Double inputRowCount = mq.getRowCount(aggregate.getInput());
+ final Double rowCount = mq.getRowCount(aggregate);
+ if (inputRowCount == null || rowCount == null) {
return null;
}
// 2. CPU cost = sorting cost
- final double cpuCost = algoUtils.computeSortCPUCost(rCount);
+      final double cpuCost = algoUtils.computeSortCPUCost(rowCount) + inputRowCount * algoUtils.getCpuUnitCost();
// 3. IO cost = cost of writing intermediary results to local FS +
// cost of reading from local FS for transferring to GBy +
// cost of transferring map outputs to GBy operator
final Double rAverageSize = mq.getAverageRowSize(aggregate.getInput());
if (rAverageSize == null) {
return null;
}
-      final double ioCost = algoUtils.computeSortIOCost(new Pair<Double,Double>(rCount,rAverageSize));
+      final double ioCost = algoUtils.computeSortIOCost(new Pair<Double,Double>(rowCount, rAverageSize));
Review comment:
`rAverageSize` is based on input row count but `rowCount` is output row
count. Is this intended or should average row size be computed based on output
row count?
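The reviewer's concern about mixing statistics can be illustrated with a small, self-contained sketch. This is not Hive's actual `AlgorithmUtilsTez`; the formula `rowCount * averageRowSize` is an assumed stand-in for the sort IO cost, and all names and numbers are hypothetical:

```java
// Hypothetical sketch, NOT Hive's cost model: assume sort IO cost is
// proportional to bytes spilled, i.e. rowCount * averageRowSize.
public class SortIOCostSketch {

    static double computeSortIOCost(double rowCount, double averageRowSize) {
        // Illustrative only: bytes written to and read back from local FS.
        return rowCount * averageRowSize;
    }

    public static void main(String[] args) {
        double outputRowCount = 1_000d;  // what `rowCount` holds in the diff
        double inputAvgRowSize = 100d;   // what `rAverageSize` holds (input-based)
        double outputAvgRowSize = 40d;   // hypothetical size measured on the output

        // Mixing the output row count with an input-based average row size
        // (as the diff does) yields a different estimate than using
        // output-based statistics throughout.
        System.out.println(computeSortIOCost(outputRowCount, inputAvgRowSize));  // 100000.0
        System.out.println(computeSortIOCost(outputRowCount, outputAvgRowSize)); // 40000.0
    }
}
```

The gap between the two estimates grows with the aggregate's reduction ratio, which is why the reviewer asks whether the input-based `rAverageSize` pairing is intentional.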
##########
File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveAggregateJoinTransposeRule.java
##########
@@ -303,6 +305,90 @@ public void onMatch(RelOptRuleCall call) {
}
}
+ /**
+   * Determines whether the given grouping is unique.
+ *
+   * Consider a join that may produce non-unique rows, whose results are later aggregated again.
+   * This method determines whether the grouping contains sufficient columns that were previously present as unique column(s).
+ */
+ private boolean isGroupingUnique(RelNode input, ImmutableBitSet groups) {
+ if (groups.isEmpty()) {
+ return false;
+ }
+ RelMetadataQuery mq = input.getCluster().getMetadataQuery();
+ Set<ImmutableBitSet> uKeys = mq.getUniqueKeys(input);
Review comment:
If the purpose of this method is to determine whether a given set of
columns is unique, you can use `areColumnsUnique`, as @jcamachor
suggested.
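For contrast, here is a Calcite-free sketch of what the check in the diff reduces to: the grouping is unique when it covers at least one known unique key of the input, which is the question `RelMetadataQuery.areColumnsUnique` answers in a single call. The class and method names below are hypothetical, and `java.util.BitSet` stands in for Calcite's `ImmutableBitSet`:

```java
import java.util.BitSet;
import java.util.Set;

// Hypothetical sketch: a grouping is unique if it covers (is a superset of)
// at least one unique key of the input. Calcite's
// RelMetadataQuery.areColumnsUnique(input, groups) answers this directly.
public class GroupingUniquenessSketch {

    static boolean isGroupingUnique(Set<BitSet> uniqueKeys, BitSet groups) {
        if (groups.isEmpty()) {
            return false;
        }
        for (BitSet key : uniqueKeys) {
            // key \ groups == empty  <=>  groups covers the whole key.
            BitSet remainder = (BitSet) key.clone();
            remainder.andNot(groups);
            if (remainder.isEmpty()) {
                return true;
            }
        }
        return false;
    }
}
```

With `areColumnsUnique`, the loop collapses to something like `Boolean unique = mq.areColumnsUnique(input, groups);` (noting that the result may be `null` when uniqueness is unknown), which also lets the metadata provider use information beyond enumerated unique keys.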