rdblue commented on code in PR #7731:
URL: https://github.com/apache/iceberg/pull/7731#discussion_r1209479290


##########
core/src/main/java/org/apache/iceberg/util/TableScanUtil.java:
##########
@@ -79,6 +88,44 @@ public static CloseableIterable<FileScanTask> splitFiles(
     return CloseableIterable.combine(splitTasks, tasks);
   }
 
+  /**
+   * Produces {@link CombinedScanTask combined tasks} from an iterable of {@link FileScanTask file
+   * tasks}, using an adaptive target split size that targets a minimum number of tasks
+   * (parallelism).
+   * (parallelism).
+   *
+   * @param files incoming iterable of file tasks
+   * @param parallelism target minimum number of tasks
+   * @param splitSize target split size
+   * @param lookback bin packing lookback
+   * @param openFileCost minimum file cost
+   * @return an iterable of combined tasks
+   */
+  public static CloseableIterable<CombinedScanTask> planTasksAdaptive(
+      CloseableIterable<FileScanTask> files,
+      int parallelism,
+      long splitSize,
+      int lookback,
+      long openFileCost) {
+
+    validatePlanningArguments(splitSize, lookback, openFileCost);
+
+    Function<FileScanTask, Long> weightFunc =
+        file ->
+            Math.max(

Review Comment:
   This is the way it was before, but I also flagged it as something to look into.
   
   I think we should apply `Math.max(fileSize, openCost)` to every file individually. Otherwise we could overpack small delete files, because they aren't large compared to the size of the data files they accompany.
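   
   As a minimal sketch of the suggestion (not the code in this PR), the weight function could floor each file at the open cost separately, assuming `openFileCost` is in scope and using the existing `FileScanTask#deletes()` and `ContentFile#fileSizeInBytes()` accessors:
   
   ```java
   // Sketch only: apply the open-file cost floor per file, so small delete
   // files are not underweighted relative to large data files.
   Function<FileScanTask, Long> weightFunc =
       file ->
           Math.max(file.length(), openFileCost)
               + file.deletes().stream()
                   .mapToLong(delete -> Math.max(delete.fileSizeInBytes(), openFileCost))
                   .sum();
   ```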


