[GitHub] [hudi] nsivabalan commented on a diff in pull request #8900: [HUDI-6334] Integrate logcompaction table service to metadata table and provides various bugfixes to metadata table

2023-06-08 Thread via GitHub


nsivabalan commented on code in PR #8900:
URL: https://github.com/apache/hudi/pull/8900#discussion_r1223876484


##
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##
@@ -1325,6 +1332,39 @@ public static Set<String> getInflightAndCompletedMetadataPartitions(HoodieTableC
     return inflightAndCompletedPartitions;
   }
 
+  public static Set<String> getValidInstantTimestamps(HoodieTableMetaClient dataMetaClient,
+                                                      HoodieTableMetaClient metadataMetaClient) {
+    // Only those log files which have a corresponding completed instant on the dataset should be read
+    // This is because the metadata table is updated before the dataset instants are committed.
+    HoodieActiveTimeline datasetTimeline = dataMetaClient.getActiveTimeline();
+    Set<String> validInstantTimestamps = datasetTimeline.filterCompletedInstants().getInstantsAsStream()
+        .map(HoodieInstant::getTimestamp).collect(Collectors.toSet());
+
+    // We should also add completed indexing delta commits in the metadata table, as they do not
+    // have corresponding completed instant in the data table
+    validInstantTimestamps.addAll(
+        metadataMetaClient.getActiveTimeline()
+            .filter(instant -> instant.isCompleted()
+                && (isIndexingCommit(instant.getTimestamp()) || isLogCompactionInstant(instant)))
+            .getInstantsAsStream()
+            .map(HoodieInstant::getTimestamp)
+            .collect(Collectors.toList()));
+
+    // For any rollbacks and restores, we cannot neglect the instants that they are rolling back.
+    // The rollback instant should be more recent than the start of the timeline for it to have rolled back any
+    // instant which we have a log block for.
+    final String earliestInstantTime = validInstantTimestamps.isEmpty() ? SOLO_COMMIT_TIMESTAMP : Collections.min(validInstantTimestamps);
+    datasetTimeline.getRollbackAndRestoreTimeline().filterCompletedInstants().getInstantsAsStream()
+        .filter(instant -> HoodieTimeline.compareTimestamps(instant.getTimestamp(), HoodieTimeline.GREATER_THAN, earliestInstantTime))
+        .forEach(instant -> {
+          validInstantTimestamps.addAll(getRollbackedCommits(instant, datasetTimeline));
+        });
+
+    // SOLO_COMMIT_TIMESTAMP is used during bootstrap so it is a valid timestamp
+    validInstantTimestamps.add(SOLO_COMMIT_TIMESTAMP);

Review Comment:
   yeah. good catch. we need to account for that
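
For context, the rollback-filtering step above can be sketched in plain Java. Collections and a Function stand in for Hudi's timeline types, and the SOLO_COMMIT_TIMESTAMP value is an assumption for illustration:

import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class ValidInstantsSketch {
  // Assumed value of HoodieTableMetadata.SOLO_COMMIT_TIMESTAMP, for illustration only.
  static final String SOLO_COMMIT_TIMESTAMP = "00000000000000";

  // Mirrors the rollback-filtering step in the diff: a rollback older than every
  // known valid instant cannot have rolled back anything we hold a log block for,
  // so only rollbacks newer than the earliest valid instant are inspected.
  static Set<String> addRolledBackInstants(Set<String> validInstantTimestamps,
                                           List<String> completedRollbackTimes,
                                           Function<String, Set<String>> getRollbackedCommits) {
    final String earliestInstantTime = validInstantTimestamps.isEmpty()
        ? SOLO_COMMIT_TIMESTAMP : Collections.min(validInstantTimestamps);
    completedRollbackTimes.stream()
        .filter(ts -> ts.compareTo(earliestInstantTime) > 0) // instant timestamps order lexicographically
        .forEach(ts -> validInstantTimestamps.addAll(getRollbackedCommits.apply(ts)));
    // SOLO_COMMIT_TIMESTAMP is used during bootstrap, so it is always valid.
    validInstantTimestamps.add(SOLO_COMMIT_TIMESTAMP);
    return validInstantTimestamps;
  }

  public static void main(String[] args) {
    Set<String> valid = new HashSet<>(List.of("20230608100000", "20230608110000"));
    // Hypothetical rollback at 12:00 that rolled back a 10:50 commit.
    addRolledBackInstants(valid, List.of("20230608120000"), ts -> Set.of("20230608105000"));
    System.out.println(valid); // contains 20230608105000 and the bootstrap timestamp
  }
}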






[GitHub] [hudi] nsivabalan commented on a diff in pull request #8900: [HUDI-6334] Integrate logcompaction table service to metadata table and provides various bugfixes to metadata table

2023-06-08 Thread via GitHub


nsivabalan commented on code in PR #8900:
URL: https://github.com/apache/hudi/pull/8900#discussion_r1223854254


##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieClientOnMergeOnReadStorage.java:
##
@@ -314,7 +314,7 @@ public void testSchedulingCompactionAfterSchedulingLogCompaction() throws Except
 
     // Try scheduling compaction, it won't succeed
     Option<String> compactionTimeStamp = client.scheduleCompaction(Option.empty());
-    assertFalse(compactionTimeStamp.isPresent());

Review Comment:
   Do we know the reason why we had to flip this assertion?
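
A hedged reading of the change, with java.util.Optional standing in for Hudi's Option (only the removed assertFalse line is visible in the quoted diff; the flipped expectation is an assumption based on the comment above):

import java.util.Optional;

public class AssertionFlipSketch {
  public static void main(String[] args) {
    // Stand-in for the Option<String> returned by client.scheduleCompaction(...);
    // the value here is hypothetical.
    Optional<String> compactionTimeStamp = Optional.of("20230608093000");

    // Removed by the PR: scheduling compaction was expected to fail while a
    // log compaction was already scheduled:
    //   assertFalse(compactionTimeStamp.isPresent());
    // Flipped expectation (assumed, not visible in the quoted diff):
    // scheduling now succeeds.
    if (!compactionTimeStamp.isPresent()) {
      throw new AssertionError("expected compaction to be scheduled");
    }
  }
}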



##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataWriteUtils.java:
##
@@ -111,6 +112,10 @@ public static HoodieWriteConfig createMetadataWriteConfig(
         // deltacommits having corresponding completed commits. Therefore, we need to compact all fileslices of all
         // partitions together requiring UnBoundedCompactionStrategy.
         .withCompactionStrategy(new UnBoundedCompactionStrategy())
+        // Check if log compaction is enabled, this is needed for tables with lot of records.
+        .withLogCompactionEnabled(writeConfig.isLogCompactionEnabled())
+        // This config is only used if enableLogCompactionForMetadata is set.

Review Comment:
   Not sure I get your comment here: "This config is only used if enableLogCompactionForMetadata is set". From the code, it looks like we fetch from writeConfig.isLogCompactionEnabled().
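
A minimal sketch of the point being made, with a hypothetical WriteConfig standing in for HoodieWriteConfig and its builder (not Hudi's actual API):

public class MetadataConfigSketch {
  // Hypothetical stand-in for HoodieWriteConfig, to show the reviewer's point:
  // the metadata table's log-compaction flag is copied unconditionally from the
  // data table's write config, so the quoted code comment about a separate
  // enableLogCompactionForMetadata switch does not match what the code does.
  static class WriteConfig {
    final boolean logCompactionEnabled;
    WriteConfig(boolean enabled) { this.logCompactionEnabled = enabled; }
    boolean isLogCompactionEnabled() { return logCompactionEnabled; }
  }

  static WriteConfig createMetadataWriteConfig(WriteConfig writeConfig) {
    // Mirrors .withLogCompactionEnabled(writeConfig.isLogCompactionEnabled()):
    // whatever the data table has, the metadata table inherits.
    return new WriteConfig(writeConfig.isLogCompactionEnabled());
  }

  public static void main(String[] args) {
    System.out.println(createMetadataWriteConfig(new WriteConfig(true)).isLogCompactionEnabled());  // true
    System.out.println(createMetadataWriteConfig(new WriteConfig(false)).isLogCompactionEnabled()); // false
  }
}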



##
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestStreamWriteOperatorCoordinator.java:
##
@@ -233,48 +233,49 @@ void testSyncMetadataTable() throws Exception {
     assertThat(completedTimeline.lastInstant().get().getTimestamp(), startsWith(HoodieTableMetadata.SOLO_COMMIT_TIMESTAMP));
 
     // test metadata table compaction
-    // write another 4 commits
-    for (int i = 1; i < 5; i++) {
+    // write another 9 commits to trigger compaction twice. Since default clean version to retain is 2.

Review Comment:
   @danny0405: can you review the changes in the Flink classes?
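
The arithmetic behind the updated test comment, sketched under an assumed compaction cadence (the cadence value is illustrative, not the actual default of hoodie.metadata.compact.max.delta.commits):

public class CompactionCadenceSketch {
  public static void main(String[] args) {
    // Illustrates "write another 9 commits to trigger compaction twice":
    // with one bootstrap commit already present, 9 more commits make 10,
    // and a compaction every 5 deltacommits then fires twice.
    final int compactEvery = 5; // assumed cadence, for illustration only
    int compactions = 0;
    for (int commit = 1; commit <= 10; commit++) {
      if (commit % compactEvery == 0) {
        compactions++;
      }
    }
    System.out.println(compactions); // 2
  }
}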



##
hudi-spark-datasource/hudi-spark/src/main/java/org/apache/hudi/cli/ArchiveExecutorUtils.java:
##
@@ -57,6 +57,15 @@ public static int archive(JavaSparkContext jsc,
         .build();
     HoodieEngineContext context = new HoodieSparkEngineContext(jsc);
     HoodieSparkTable table = HoodieSparkTable.create(config, context);
+
+    // Check if the metadata is already initialized. If it is initialized, ignore the input argument enableMetadata.
Review Comment:
   Are these required?
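
One way such an initialization check could be expressed, sketched with standard Hadoop FileSystem calls (illustrative only; the PR's actual check may read the table config instead):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MetadataInitCheck {
  // The metadata table lives under <basePath>/.hoodie/metadata, so its presence
  // on storage indicates the table was already initialized.
  public static boolean isMetadataTableInitialized(String basePath, Configuration conf) throws IOException {
    Path metadataPath = new Path(basePath, ".hoodie/metadata");
    FileSystem fs = metadataPath.getFileSystem(conf);
    return fs.exists(metadataPath);
  }
}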



##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java:
##
@@ -1021,17 +1023,46 @@ private void runPendingTableServicesOperations(BaseHoodieWriteClient writeClient
    * deltacommit.
    */
   protected void compactIfNecessary(BaseHoodieWriteClient writeClient, String latestDeltacommitTime) {
+
+    // Check if there are any pending compaction or log compaction instants in the timeline.
+    // If pending compact/logcompaction operations are found abort scheduling new compaction/logcompaction operations.
+    Option<HoodieInstant> pendingLogCompactionInstant =
+        metadataMetaClient.getActiveTimeline().filterPendingLogCompactionTimeline().firstInstant();

Review Comment:
   Do we have tests for these?
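
A minimal sketch of the guard the diff introduces, with java.util.Optional standing in for Hudi's Option and hypothetical instant timestamps:

import java.util.Optional;

public class CompactionGuardSketch {
  // Mirrors the intent of the added code: if the metadata table timeline
  // already has a pending compaction or log compaction instant, abort
  // scheduling a new one.
  static boolean canScheduleNewTableService(Optional<String> pendingCompaction,
                                            Optional<String> pendingLogCompaction) {
    return !pendingCompaction.isPresent() && !pendingLogCompaction.isPresent();
  }

  public static void main(String[] args) {
    System.out.println(canScheduleNewTableService(Optional.empty(), Optional.of("20230608120000"))); // false
    System.out.println(canScheduleNewTableService(Optional.empty(), Optional.empty())); // true
  }
}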


