danny0405 commented on code in PR #12615:
URL: https://github.com/apache/hudi/pull/12615#discussion_r1929446901


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -302,6 +303,7 @@ class HoodieSparkSqlWriterInternal {
           else KeyGeneratorType.getKeyGeneratorClassName(hoodieConfig)
         HoodieTableMetaClient.newTableBuilder()
           .setTableType(tableType)
+          .setTableVersion(Integer.valueOf(getStringWithAltKeys(parameters, HoodieWriteConfig.WRITE_TABLE_VERSION)))

Review Comment:
   Aren't table version and write table version two different config options? I'm not sure why the table version is always set from the write table version; the write table version is auxiliary and intended for migration, so I don't think we should set it explicitly every time we initialize the table. Can we detect the table version automatically from the table config and validate it against the write table version instead?
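    The suggested approach could look roughly like the following sketch. This is a hypothetical illustration, not code from the PR: `tableConfigVersion` and `writeTableVersion` stand in for values that would be read from the existing table config and from `HoodieWriteConfig.WRITE_TABLE_VERSION`, respectively.

    ```scala
    // Hypothetical sketch of the suggestion: prefer the version already
    // recorded in the table config, validate it against the configured
    // write table version, and only fall back to the write table version
    // when initializing a fresh table.
    object TableVersionCheck {
      def resolveTableVersion(tableConfigVersion: Option[Int], writeTableVersion: Int): Int =
        tableConfigVersion match {
          case Some(existing) =>
            // Table already exists: keep its version, but fail fast on mismatch.
            require(existing == writeTableVersion,
              s"Write table version $writeTableVersion does not match existing table version $existing")
            existing
          case None =>
            // Fresh table: fall back to the configured write table version.
            writeTableVersion
        }
    }
    ```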



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
