nsivabalan commented on a change in pull request #3247:
URL: https://github.com/apache/hudi/pull/3247#discussion_r671900060



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala
##########
@@ -128,14 +128,35 @@ object HoodieSparkSqlWriter {
           .setPayloadClassName(hoodieConfig.getString(PAYLOAD_CLASS_OPT_KEY))
           .setPreCombineField(hoodieConfig.getStringOrDefault(PRECOMBINE_FIELD_OPT_KEY, null))
           .setPartitionColumns(partitionColumns)
+          .setPopulateMetaColumns(parameters.getOrElse(HoodieTableConfig.HOODIE_POPULATE_META_COLUMNS.key(), HoodieTableConfig.HOODIE_POPULATE_META_COLUMNS.defaultValue()).toBoolean)
           .initTable(sparkContext.hadoopConfiguration, path.get)
         tableConfig = tableMetaClient.getTableConfig
+      } else {
+        // validate table properties
+        val tableMetaClient = HoodieTableMetaClient.builder().setBasePath(path.get).setConf(sparkContext.hadoopConfiguration).build()

Review comment:
       I understand that we need one place to do the validation. As of now, any caller that goes directly to WriteClient has to make a separate call to do the validation.
   
   I can think of another option: fix all startCommit() methods in WriteClient to take in the operationType and do the validation there. There are quite a few callers I would need to fix, but with this approach the validation would also happen within our custom data source, where we instantiate the writeClient and start the commit (in the row writer path).
   Once we have consensus I can make the changes. I feel this is the right approach.
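   
   To make the idea concrete, here is a minimal sketch of that option. The class name WriteClientSketch, the populateMetaColumns constructor flag, and the validateAgainstTableProperties helper are all hypothetical, not the actual WriteClient API; the point is only that startCommit() would accept the operationType and validate it in one place:
   
   ```scala
   import org.apache.hudi.common.model.WriteOperationType
   import org.apache.hudi.exception.HoodieException
   
   // Hypothetical sketch of a WriteClient whose startCommit() takes the
   // operation type, so validation lives in a single place.
   class WriteClientSketch(populateMetaColumns: Boolean) {
   
     def startCommit(operationType: WriteOperationType): String = {
       // Every caller, including the row-writer path in the custom data
       // source, passes through this validation before an instant is created.
       validateAgainstTableProperties(operationType)
       // ... existing startCommit logic (create the requested instant, etc.) ...
       "commit-instant-time" // placeholder return value
     }
   
     // Assumed helper: reject operations that conflict with how the table was
     // created, e.g. upserts on a table written without meta columns.
     private def validateAgainstTableProperties(operationType: WriteOperationType): Unit = {
       if (!populateMetaColumns && operationType == WriteOperationType.UPSERT) {
         throw new HoodieException("upsert requires meta columns to be populated")
       }
     }
   }
   ```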




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
