szehon-ho commented on code in PR #11982:
URL: https://github.com/apache/iceberg/pull/11982#discussion_r1925848385
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteTablePathSparkAction.java:
##########
@@ -728,4 +724,22 @@ private String getMetadataLocation(Table tbl) {
        !metadataDir.isEmpty(), "Failed to get the metadata file root directory");
return metadataDir;
}
+
+ @VisibleForTesting
+ Broadcast<Table> tableBroadcast() {
+ if (tblBroadcast == null) {
+      this.tblBroadcast = sparkContext().broadcast(SerializableTableWithSize.copyOf(table));
+ }
+
+ return tblBroadcast;
+ }
+
+ @VisibleForTesting
+  Broadcast<Map<Integer, PartitionSpec>> specsBroadcast(Map<Integer, PartitionSpec> specsById) {
Review Comment:
Yeah, good catch. This looks like a missed opportunity in the original
implementation. I'm OK with cleaning it up in a subsequent PR if it's more
involved; up to @manuzhang.
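For illustration only, a hypothetical sketch (not the actual Iceberg code) of the lazy, memoized accessor pattern that `tableBroadcast()` uses, applied to the specs broadcast being discussed. Spark's `Broadcast<T>` is stood in for by a plain `Object` so the sketch is self-contained; the class and method names are made up.

```java
import java.util.function.Supplier;

// Hypothetical sketch: memoize an expensive broadcast so repeated calls
// reuse the first result instead of re-broadcasting each time.
class LazySpecsBroadcastSketch {
  // Would be Broadcast<Map<Integer, PartitionSpec>> in the real Spark action.
  private Object specsBroadcast;

  // Compute the broadcast at most once; later calls return the cached value.
  synchronized Object specsBroadcast(Supplier<Object> broadcaster) {
    if (specsBroadcast == null) {
      specsBroadcast = broadcaster.get();
    }
    return specsBroadcast;
  }
}
```

The `synchronized` guard mirrors the single-initialization intent of the `null` check in `tableBroadcast()`.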
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteTablePathSparkAction.java:
##########
@@ -728,4 +724,22 @@ private String getMetadataLocation(Table tbl) {
        !metadataDir.isEmpty(), "Failed to get the metadata file root directory");
return metadataDir;
}
+
+ @VisibleForTesting
Review Comment:
Thanks for catching this bug. You probably tried this already, but I am
wondering: is there no other way to force this exception except by adding this
extra @VisibleForTesting method? Doesn't the code try to deserialize it during
a normal rewriteTablePath call?
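As one hypothetical alternative to an extra test hook, a test could push the broadcast payload through a plain Java serialization round trip directly, which surfaces (de)serialization failures without changing production visibility. The helper below is a made-up sketch; any `Serializable` value (such as what `SerializableTableWithSize.copyOf` produces) could be passed through it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical sketch: exercise Java (de)serialization of a value directly,
// the same mechanism a Spark broadcast relies on under the hood.
class SerializationRoundTrip {
  @SuppressWarnings("unchecked")
  static <T extends Serializable> T roundTrip(T value) {
    try {
      ByteArrayOutputStream bytes = new ByteArrayOutputStream();
      try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
        out.writeObject(value); // serialize, as a broadcast would
      }
      try (ObjectInputStream in =
          new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
        return (T) in.readObject(); // deserialize; throws if the payload is broken
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```

A test asserting `roundTrip(payload)` succeeds (or fails with the expected exception) covers the same code path the @VisibleForTesting accessor was added to reach.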
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]