danny0405 commented on code in PR #13383:
URL: https://github.com/apache/hudi/pull/13383#discussion_r2128063540


##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/client/functional/TestMetadataUtilRLIandSIRecordGeneration.java:
##########
@@ -701,4 +709,36 @@ private void parseRecordKeysFromBaseFiles(List<WriteStatus> writeStatuses, Map<S
       }
     });
   }
+
+  Set<String> getRecordKeys(String partition, String baseInstantTime, String fileId, List<StoragePath> logFilePaths, HoodieTableMetaClient datasetMetaClient,
+                                   Option<Schema> writerSchemaOpt, String latestCommitTimestamp) throws IOException {
+    if (writerSchemaOpt.isPresent()) {
+      // read log file records without merging
+      FileSlice fileSlice = new FileSlice(partition, baseInstantTime, fileId);
+      logFilePaths.forEach(logFilePath -> {
+        HoodieLogFile logFile = new HoodieLogFile(logFilePath);
+        fileSlice.addLogFile(logFile);
+      });
+      TypedProperties properties = new TypedProperties();
+      // configure un-merged log file reader
+      properties.setProperty(HoodieReaderConfig.MERGE_TYPE.key(), REALTIME_SKIP_MERGE);

Review Comment:
   If the purpose is to collect all the keys, using the merged reader with `emitDelete` set to true should also work, so that we can avoid the unnecessary changes in `HoodieUnMergedLogRecordScanner`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
