danielcweeks commented on code in PR #7731:
URL: https://github.com/apache/iceberg/pull/7731#discussion_r1209476557
##########
core/src/main/java/org/apache/iceberg/util/TableScanUtil.java:
##########
@@ -79,6 +88,44 @@ public static CloseableIterable<FileScanTask> splitFiles(
return CloseableIterable.combine(splitTasks, tasks);
}
+ /**
+   * Produces {@link CombinedScanTask combined tasks} from an iterable of {@link FileScanTask
+   * file tasks}, using an adaptive target split size that targets a minimum number of tasks
+   * (parallelism).
+ *
+ * @param files incoming iterable of file tasks
+ * @param parallelism target minimum number of tasks
+ * @param splitSize target split size
+ * @param lookback bin packing lookback
+ * @param openFileCost minimum file cost
+ * @return an iterable of combined tasks
+ */
+ public static CloseableIterable<CombinedScanTask> planTasksAdaptive(
+ CloseableIterable<FileScanTask> files,
+ int parallelism,
+ long splitSize,
+ int lookback,
+ long openFileCost) {
+
+ validatePlanningArguments(splitSize, lookback, openFileCost);
+
+ Function<FileScanTask, Long> weightFunc =
+ file ->
+ Math.max(
Review Comment:
I'm a little confused by the way we're calculating this weight function. It looks like we're taking the larger of:
1. the total file size (including deletes)
2. the total number of files * open cost

Shouldn't the weight of the task be the total file size plus the open cost for each file? With 1. above we're not including the open cost, and in 2. we're not including the data to be read.
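
To make the difference concrete, here is a minimal sketch of the two weightings being compared. The method names, the standalone class, and the numbers are hypothetical illustrations, not Iceberg's actual API:

```java
// Sketch contrasting the two task weightings discussed above.
// All names and numbers here are illustrative assumptions.
public class WeightSketch {

  // Weighting as read in the diff: the larger of total size vs. files * open cost.
  static long maxWeight(long totalSizeBytes, int fileCount, long openFileCost) {
    return Math.max(totalSizeBytes, (long) fileCount * openFileCost);
  }

  // Suggested alternative: total size plus a per-file open cost.
  static long sumWeight(long totalSizeBytes, int fileCount, long openFileCost) {
    return totalSizeBytes + (long) fileCount * openFileCost;
  }

  public static void main(String[] args) {
    long size = 10L * 1024 * 1024;     // 10 MiB of data + deletes
    int files = 3;
    long openCost = 4L * 1024 * 1024;  // 4 MiB open cost per file

    // max(10 MiB, 12 MiB) = 12 MiB -- the data size is effectively dropped
    System.out.println(maxWeight(size, files, openCost));
    // 10 MiB + 12 MiB = 22 MiB -- both data and open cost are counted
    System.out.println(sumWeight(size, files, openCost));
  }
}
```

With the max-based weighting, whichever term is smaller contributes nothing to the weight, which is the asymmetry the comment is pointing out.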
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]