RussellSpitzer commented on PR #15150:
URL: https://github.com/apache/iceberg/pull/15150#issuecomment-3985937898
I think those edge cases apply to the inline code I posted in the comment
(the orderingSatisfies approach), not the actual commit at
RussellSpitzer@c48c6a6. I abandoned that approach and instead tried to boil it
down to one method.
outputSortOrderId(writeRequirements) just checks: did we ask Spark to sort?
If hasOrdering() is true, we know the ordering came from the table's sort order
(that's what SparkWriteUtil builds), so we use table.sortOrder().orderId(). No
suffix matching, no iterating over table sort orders.
For rewrites, the OUTPUT_SORT_ORDER_ID path is the same as yours. This
avoids the class extension for SparkRequirements and removes the new
requirements for the Sort Compaction case.
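A minimal sketch of that single-method idea. The interfaces below are simplified stand-ins for illustration only, not the real Iceberg/Spark classes (the actual types would be org.apache.iceberg.Table and the Spark write-requirements wrapper); hasOrdering() and orderId() mirror the methods discussed above:

```java
// Stand-in for the write requirements SparkWriteUtil builds.
interface WriteRequirements {
  boolean hasOrdering(); // did we ask Spark to sort the output?
}

// Stand-in for org.apache.iceberg.SortOrder.
interface SortOrder {
  int orderId();
}

// Stand-in for org.apache.iceberg.Table.
interface Table {
  SortOrder sortOrder();
}

class SortOrderIds {
  // Iceberg reserves order id 0 for the unsorted sort order.
  static final int UNSORTED_ORDER_ID = 0;

  // If Spark was asked to sort, the ordering came from the table's
  // current sort order, so report that order's id; otherwise unsorted.
  static int outputSortOrderId(Table table, WriteRequirements requirements) {
    return requirements.hasOrdering()
        ? table.sortOrder().orderId()
        : UNSORTED_ORDER_ID;
  }
}
```

The point is that no suffix matching or iteration over the table's historical sort orders is needed: the only question is whether an ordering was requested at all.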
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]