yihua commented on code in PR #13261:
URL: https://github.com/apache/hudi/pull/13261#discussion_r2094019859


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/streamer/StreamSync.java:
##########
@@ -466,14 +466,14 @@ public Pair<Option<String>, JavaRDD<WriteStatus>> syncOnce() throws IOException
     try {
       // Refresh Timeline
       HoodieTableMetaClient metaClient = initializeMetaClientAndRefreshTimeline();
-      String instantTime = metaClient.createNewInstantTime();
 
-      Pair<InputBatch, Boolean> inputBatchAndUseRowWriter = readFromSource(instantTime, metaClient);
+      Pair<InputBatch, Boolean> inputBatchAndUseRowWriter = readFromSource(metaClient);
 
       if (inputBatchAndUseRowWriter != null) {
         InputBatch inputBatch = inputBatchAndUseRowWriter.getLeft();
         boolean useRowWriter = inputBatchAndUseRowWriter.getRight();
         initializeWriteClientAndRetryTableServices(inputBatch, metaClient);
+        String instantTime = startCommit(metaClient, !autoGenerateRecordKeys);

Review Comment:
   It looks like `instantTime` is only used by `writeToSinkAndDoMetaSync`. Could the `metaClient` be passed to `writeToSinkAndDoMetaSync` instead, so that all commit and write logic lives in one place?
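   The refactoring suggested above can be sketched with simplified stand-in types (the class and method names below are hypothetical placeholders, not Hudi's actual API): rather than creating the instant in `syncOnce` and threading the `String` through intermediate calls, the sink method receives the meta client and starts the commit where it is consumed.

   ```java
   // Hypothetical, simplified stand-in for HoodieTableMetaClient; not the real Hudi API.
   class MetaClient {
     String createNewInstantTime() { return "20240101000000"; }
   }

   public class SyncSketch {
     // The instant is created inside the method that actually uses it,
     // keeping commit creation and write logic in one place.
     static String writeToSinkAndDoMetaSync(MetaClient metaClient) {
       String instantTime = metaClient.createNewInstantTime();
       // ... write the batch under instantTime, then run meta sync ...
       return instantTime;
     }

     public static void main(String[] args) {
       // Caller only needs to hand over the meta client, not a pre-made instant.
       System.out.println(writeToSinkAndDoMetaSync(new MetaClient())); // prints "20240101000000"
     }
   }
   ```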



##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/streamer/StreamSync.java:
##########
@@ -950,22 +940,23 @@ private String startCommit(String instantTime, boolean retryEnabled) {
           // No-Op
         }
       }
-      instantTime = writeClient.createNewInstantTime();
     }
     throw lastException;
   }
 
   private WriteClientWriteResult writeToSink(InputBatch inputBatch, String instantTime, boolean useRowWriter) {
     WriteClientWriteResult writeClientWriteResult = null;
-    instantTime = startCommit(instantTime, !autoGenerateRecordKeys);
 
     if (useRowWriter) {
       Dataset<Row> df = (Dataset<Row>) inputBatch.getBatch().orElseGet(() -> hoodieSparkContext.getSqlContext().emptyDataFrame());
       HoodieWriteConfig hoodieWriteConfig = prepareHoodieConfigForRowWriter(inputBatch.getSchemaProvider().getTargetSchema());
-      BaseDatasetBulkInsertCommitActionExecutor executor = new HoodieStreamerDatasetBulkInsertCommitActionExecutor(hoodieWriteConfig, writeClient, instantTime);
+      BaseDatasetBulkInsertCommitActionExecutor executor = new HoodieStreamerDatasetBulkInsertCommitActionExecutor(hoodieWriteConfig, writeClient);
       writeClientWriteResult = new WriteClientWriteResult(executor.execute(df, !HoodieStreamerUtils.getPartitionColumns(props).isEmpty()).getWriteStatuses());
     } else {
-      JavaRDD<HoodieRecord> records = (JavaRDD<HoodieRecord>) inputBatch.getBatch().orElseGet(() -> hoodieSparkContext.emptyRDD());
+      HoodieRecordType recordType = createRecordMerger(props).getRecordType();
+      Option<JavaRDD<HoodieRecord>> recordsOption = HoodieStreamerUtils.createHoodieRecords(cfg, props, inputBatch.getBatch(), schemaProvider,

Review Comment:
   Should the schema provider come from `inputBatch.getSchemaProvider()`, which is always resolved per batch on the non-row-writer path, instead of the cached `schemaProvider`?
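   The concern above can be illustrated with simplified stand-in types (names below are hypothetical, not Hudi's actual classes): a provider cached as a field at startup can go stale across batches, whereas the one carried on the batch reflects the schema that batch was read with.

   ```java
   // Hypothetical, simplified stand-ins; not the real Hudi types.
   class SchemaProvider {
     final String name;
     SchemaProvider(String name) { this.name = name; }
   }

   class InputBatch {
     private final SchemaProvider schemaProvider;
     InputBatch(SchemaProvider sp) { this.schemaProvider = sp; }
     SchemaProvider getSchemaProvider() { return schemaProvider; }
   }

   public class SchemaSketch {
     // Cached provider, set once at startup; may lag behind schema evolution.
     static SchemaProvider cachedSchemaProvider = new SchemaProvider("cached");

     static String writeToSink(InputBatch batch) {
       // As the review suggests: prefer the per-batch provider over the cache,
       // so records are created with the schema the batch was actually read with.
       SchemaProvider sp = batch.getSchemaProvider();
       return sp.name;
     }

     public static void main(String[] args) {
       InputBatch batch = new InputBatch(new SchemaProvider("per-batch"));
       System.out.println(writeToSink(batch)); // prints "per-batch"
     }
   }
   ```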



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
