c2zwdjnlcg commented on code in PR #15239:
URL: https://github.com/apache/iceberg/pull/15239#discussion_r2874284156
##########
docs/docs/spark-configuration.md:
##########
@@ -220,6 +220,7 @@ spark.read
| stream-from-timestamp | (none) | A timestamp in milliseconds to stream from; if before the oldest known ancestor snapshot, the oldest will be used |
| streaming-max-files-per-micro-batch | INT_MAX | Maximum number of files per microbatch |
| streaming-max-rows-per-micro-batch | INT_MAX | "Soft maximum" number of rows per microbatch; always includes all rows in the next unprocessed file, and excludes additional files if their inclusion would exceed the soft max limit |
+| streaming-checkpoint-use-hadoop | false | Use Hadoop FileSystem for streaming checkpoint operations instead of the table's FileIO implementation |
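The streaming read options documented in the table above could be exercised roughly as follows (a minimal sketch, not from the PR itself; it assumes a live SparkSession with the Iceberg Spark runtime on the classpath, and `db.table` is a placeholder table identifier):

```python
# Sketch only: requires a running Spark cluster with the Iceberg runtime;
# "db.table" and all option values are illustrative placeholders.
df = (
    spark.readStream
    .format("iceberg")
    # Start streaming from this epoch-millisecond timestamp; if it predates
    # the oldest known ancestor snapshot, the oldest is used instead.
    .option("stream-from-timestamp", "1700000000000")
    # Hard cap on the number of files per micro-batch.
    .option("streaming-max-files-per-micro-batch", "100")
    # Soft cap on rows per micro-batch (always includes the next
    # unprocessed file in full).
    .option("streaming-max-rows-per-micro-batch", "1000000")
    .load("db.table")
)
```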
Review Comment:
Makes sense to me, but @danielcweeks are you onboard with that? If so, I can make the changes.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]