difin commented on code in PR #5540:
URL: https://github.com/apache/hive/pull/5540#discussion_r1955001006
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/compaction/IcebergQueryCompactor.java:
##########
@@ -62,20 +63,32 @@ public boolean run(CompactorContext context) throws IOException, HiveException,
     Table icebergTable = IcebergTableUtil.getTable(conf, table.getTTable());
     String compactionQuery;
     String orderBy = ci.orderByClause == null ? "" : ci.orderByClause;
+    String fileSizeCond = null;
+
+    if (ci.type == CompactionType.MINOR) {
+      long fileSizeInBytesThreshold = IcebergCompactionUtil.getFileSizeThreshold(ci, conf);
+      fileSizeCond = String.format("%1$s in (select file_path from %2$s.files where file_size_in_bytes < %3$d)",
+          VirtualColumn.FILE_PATH.getName(), compactTableName, fileSizeInBytesThreshold);
+      conf.setLong(CompactorContext.COMPACTION_FILE_SIZE_THRESHOLD, fileSizeInBytesThreshold);
Review Comment:
   It is needed at commit time in HiveIcebergOutputCommitter.
   We use the RewriteFiles API to replace the old data and delete files with the new data files.
   When searching for the old data files, we need the same file size threshold that was used in the compaction query, so that we find exactly the data files that were rewritten.
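The pattern described above (stash the threshold in the job configuration on the compaction side, read the same value back on the commit side) can be sketched roughly as follows. This is a minimal illustration, not Hive code: `java.util.Properties` stands in for the Hadoop `Configuration`, and the key string is hypothetical (the PR uses the `CompactorContext.COMPACTION_FILE_SIZE_THRESHOLD` constant).

```java
import java.util.Properties;

public class ThresholdRoundTrip {
  // Hypothetical key; the PR uses CompactorContext.COMPACTION_FILE_SIZE_THRESHOLD
  static final String COMPACTION_FILE_SIZE_THRESHOLD = "compactor.file.size.threshold";

  // Compaction side: record the threshold that was baked into the query predicate
  static void storeThreshold(Properties conf, long thresholdBytes) {
    conf.setProperty(COMPACTION_FILE_SIZE_THRESHOLD, Long.toString(thresholdBytes));
  }

  // Commit side (the HiveIcebergOutputCommitter analogue): read the same threshold
  // back, so the RewriteFiles call selects exactly the files the query rewrote
  static long readThreshold(Properties conf, long defaultBytes) {
    String v = conf.getProperty(COMPACTION_FILE_SIZE_THRESHOLD);
    return v == null ? defaultBytes : Long.parseLong(v);
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    storeThreshold(conf, 128L * 1024 * 1024); // e.g. a 128 MB minor-compaction threshold
    System.out.println(readThreshold(conf, -1L)); // prints 134217728
  }
}
```

The point of the round trip is that both sides derive their file selection from one value: if the commit side recomputed the threshold independently, a config change between query time and commit time could make RewriteFiles target a different set of files than the query actually rewrote.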
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]