suryaprasanna commented on code in PR #8900:
URL: https://github.com/apache/hudi/pull/8900#discussion_r1223865818


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataWriteUtils.java:
##########
@@ -111,6 +112,10 @@ public static HoodieWriteConfig createMetadataWriteConfig(
             // deltacommits having corresponding completed commits. Therefore, we need to compact all fileslices of all
             // partitions together requiring UnBoundedCompactionStrategy.
             .withCompactionStrategy(new UnBoundedCompactionStrategy())
+            // Check if log compaction is enabled; this is needed for tables with a lot of records.
+            .withLogCompactionEnabled(writeConfig.isLogCompactionEnabled())
+            // This config is only used if enableLogCompactionForMetadata is set.

Review Comment:
   Fixed the comment.
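   
   For context, a minimal sketch of how the data table's log-compaction flag is carried into the metadata table's write config; the builder methods are the ones used in this file, while `metadataBasePath` and the standalone-statement shape are illustrative assumptions, not the full createMetadataWriteConfig:
   
   ```java
   import org.apache.hudi.config.HoodieCompactionConfig;
   import org.apache.hudi.config.HoodieWriteConfig;
   import org.apache.hudi.table.action.compact.strategy.UnBoundedCompactionStrategy;
   
   HoodieWriteConfig metadataWriteConfig = HoodieWriteConfig.newBuilder()
       .withPath(metadataBasePath) // hypothetical: base path of the metadata table
       .withCompactionConfig(HoodieCompactionConfig.newBuilder()
           // All file slices of all partitions are compacted together, hence unbounded.
           .withCompactionStrategy(new UnBoundedCompactionStrategy())
           // Mirror the data table's log-compaction setting on the metadata table.
           .withLogCompactionEnabled(writeConfig.isLogCompactionEnabled())
           .build())
       .build();
   ```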



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java:
##########
@@ -1021,17 +1023,46 @@ private void runPendingTableServicesOperations(BaseHoodieWriteClient writeClient
    * deltacommit.
    */
   protected void compactIfNecessary(BaseHoodieWriteClient writeClient, String latestDeltacommitTime) {
+
+    // Check if there are any pending compaction or log compaction instants in the timeline.
+    // If pending compaction or log compaction operations are found, abort scheduling new ones.
+    Option<HoodieInstant> pendingLogCompactionInstant =
+        metadataMetaClient.getActiveTimeline().filterPendingLogCompactionTimeline().firstInstant();

Review Comment:
   Tests for various cases, such as creating a compaction plan while a log compaction plan is pending (and vice versa), are present in TestHoodieClientOnMergeOnReadStorage.
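   
   For reference, a minimal sketch of the guard this hunk adds, assuming the metadataMetaClient field from the diff and a LOG field on the class; the exact log message is illustrative:
   
   ```java
   Option<HoodieInstant> pendingLogCompactionInstant = metadataMetaClient.getActiveTimeline()
       .filterPendingLogCompactionTimeline().firstInstant();
   Option<HoodieInstant> pendingCompactionInstant = metadataMetaClient.getActiveTimeline()
       .filterPendingCompactionTimeline().firstInstant();
   if (pendingLogCompactionInstant.isPresent() || pendingCompactionInstant.isPresent()) {
     // An earlier plan must complete first, so skip scheduling a new one.
     LOG.info(String.format("Pending compaction %s / log compaction %s found on the metadata "
         + "timeline; not scheduling new table services.",
         pendingCompactionInstant, pendingLogCompactionInstant));
     return;
   }
   ```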



##########
hudi-spark-datasource/hudi-spark/src/main/java/org/apache/hudi/cli/ArchiveExecutorUtils.java:
##########
@@ -57,6 +57,15 @@ public static int archive(JavaSparkContext jsc,
         .build();
     HoodieEngineContext context = new HoodieSparkEngineContext(jsc);
     HoodieSparkTable<HoodieAvroPayload> table = HoodieSparkTable.create(config, context);
+
+    // Check if the metadata table is already initialized. If it is, ignore the input argument enableMetadata.

Review Comment:
   Reverting these changes.
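   
   For the record, the reverted check amounted to something like the sketch below. HoodieTableMetadata.getMetadataTableBasePath and FSUtils.getFs are existing helpers; using a filesystem-existence probe to mean "already initialized", the basePath variable, and leaving the IOException to the caller are assumptions on my part:
   
   ```java
   String metadataTablePath = HoodieTableMetadata.getMetadataTableBasePath(basePath);
   FileSystem fs = FSUtils.getFs(metadataTablePath, jsc.hadoopConfiguration());
   // If the metadata table directory already exists, keep maintaining it
   // regardless of the enableMetadata argument passed in.
   boolean effectiveEnableMetadata = enableMetadata || fs.exists(new Path(metadataTablePath));
   ```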



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
