Neer393 commented on code in PR #6039:
URL: https://github.com/apache/hive/pull/6039#discussion_r2292722055
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/compaction/IcebergTableOptimizer.java:
##########
@@ -85,52 +85,53 @@ public IcebergTableOptimizer(HiveConf conf, TxnStore txnHandler, MetadataCache m
    */
   @Override
   public Set<CompactionInfo> findPotentialCompactions(long lastChecked, ShowCompactResponse currentCompactions,
-      Set<String> skipDBs, Set<String> skipTables) {
+      Set<String> skipDBs,
+      Set<String> skipTables) {
     Set<CompactionInfo> compactionTargets = Sets.newHashSet();
-    getTableNames().stream()
-        .filter(table -> !skipDBs.contains(table.getDb()))
-        .filter(table -> !skipTables.contains(table.getNotEmptyDbTable()))
-        .map(table -> {
-          org.apache.hadoop.hive.ql.metadata.Table hiveTable = getHiveTable(table.getDb(), table.getTable());
-          org.apache.iceberg.Table icebergTable = IcebergTableUtil.getTable(conf, hiveTable.getTTable());
-          return Pair.of(hiveTable, icebergTable);
-        })
-        .filter(t -> hasNewCommits(t.getRight(), snapshotTimeMilCache.get(t.getLeft().getFullyQualifiedName())))
-        .forEach(t -> {
-          String qualifiedTableName = t.getLeft().getFullyQualifiedName();
-          org.apache.hadoop.hive.ql.metadata.Table hiveTable = t.getLeft();
-          org.apache.iceberg.Table icebergTable = t.getRight();
-
-          if (icebergTable.spec().isPartitioned()) {
-            List<org.apache.hadoop.hive.ql.metadata.Partition> partitions = findModifiedPartitions(hiveTable,
-                icebergTable, snapshotTimeMilCache.get(qualifiedTableName), true);
-
-            partitions.forEach(partition -> addCompactionTargetIfEligible(hiveTable.getTTable(), icebergTable,
-                partition.getName(), compactionTargets, currentCompactions, skipDBs, skipTables));
-
-            if (IcebergTableUtil.hasUndergonePartitionEvolution(icebergTable) && !findModifiedPartitions(hiveTable,
-                icebergTable, snapshotTimeMilCache.get(qualifiedTableName), false).isEmpty()) {
-              addCompactionTargetIfEligible(hiveTable.getTTable(), icebergTable,
-                  null, compactionTargets, currentCompactions, skipDBs, skipTables);
-            }
-          } else {
-            addCompactionTargetIfEligible(hiveTable.getTTable(), icebergTable, null, compactionTargets,
-                currentCompactions, skipDBs, skipTables);
-          }
-
-          snapshotTimeMilCache.put(qualifiedTableName, icebergTable.currentSnapshot().timestampMillis());
-        });
+    Iterable<Table> tables = getTables();
Review Comment:
The current implementation uses metadataCache to reduce the number of msc calls, since it previously made one msc call per table to fetch the HMS API Table object.
Now that we use TableIterator, the number of msc calls is already reduced, and metadataCache no longer seems to fit the picture: regardless of whether we store the HMS API Table object in the cache, TableFetcher will fetch it for us. So I don't see a point in storing the table object.
Please correct me if I am wrong.
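To illustrate the call-count argument above, here is a toy sketch (not Hive code: `MockMetastore`, `fetchTable`, and `fetchTablesBatch` are hypothetical stand-ins) contrasting one remote call per table with one batched call for all tables. With batching, a per-table object cache no longer saves any remote calls on a single pass:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TableFetchSketch {

  // Hypothetical stand-in for a metastore client that counts remote calls,
  // so the two fetch strategies can be compared.
  static class MockMetastore {
    int calls = 0;
    private final Map<String, String> tables = new HashMap<>();

    MockMetastore(List<String> names) {
      for (String n : names) {
        tables.put(n, "Table(" + n + ")");
      }
    }

    // One remote call per table: the pattern a per-table cache was mitigating.
    String fetchTable(String name) {
      calls++;
      return tables.get(name);
    }

    // One remote call for the whole batch: the pattern an iterator-style
    // fetcher amortizes, making the per-table cache redundant for one pass.
    List<String> fetchTablesBatch(List<String> names) {
      calls++;
      List<String> out = new ArrayList<>();
      for (String n : names) {
        out.add(tables.get(n));
      }
      return out;
    }
  }

  public static void main(String[] args) {
    List<String> names = Arrays.asList("db.t1", "db.t2", "db.t3");

    MockMetastore perTable = new MockMetastore(names);
    for (String n : names) {
      perTable.fetchTable(n);          // 3 tables -> 3 remote calls
    }

    MockMetastore batched = new MockMetastore(names);
    batched.fetchTablesBatch(names);   // 3 tables -> 1 remote call

    System.out.println(perTable.calls); // 3
    System.out.println(batched.calls);  // 1
  }
}
```

A cache would still help if the same table object were looked up repeatedly across cycles, but for a single sweep the batched fetch already hands back every object.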
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]