bryanck commented on code in PR #15059:
URL: https://github.com/apache/iceberg/pull/15059#discussion_r2705953120
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkMicroBatchStream.java:
##########
@@ -316,182 +243,43 @@ private static StreamingOffset determineStartingOffset(Table table, Long fromTim
}
}
-  private static int getMaxFiles(ReadLimit readLimit) {
-    if (readLimit instanceof ReadMaxFiles) {
-      return ((ReadMaxFiles) readLimit).maxFiles();
-    }
-
-    if (readLimit instanceof CompositeReadLimit) {
-      // We do not expect a CompositeReadLimit to contain a nested CompositeReadLimit.
-      // In fact, it should only be a composite of two or more of ReadMinRows, ReadMaxRows and
-      // ReadMaxFiles, with no more than one of each.
-      ReadLimit[] limits = ((CompositeReadLimit) readLimit).getReadLimits();
-      for (ReadLimit limit : limits) {
-        if (limit instanceof ReadMaxFiles) {
-          return ((ReadMaxFiles) limit).maxFiles();
-        }
-      }
+  @Override
+  public Map<String, String> metrics(Optional<Offset> latestConsumedOffset) {
Review Comment:
It seems like this is only useful for the async planner; for the sync planner it will report the same thing.