nsivabalan commented on a change in pull request #3247:
URL: https://github.com/apache/hudi/pull/3247#discussion_r671900060



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala
##########
@@ -128,14 +128,35 @@ object HoodieSparkSqlWriter {
           .setPayloadClassName(hoodieConfig.getString(PAYLOAD_CLASS_OPT_KEY))
           .setPreCombineField(hoodieConfig.getStringOrDefault(PRECOMBINE_FIELD_OPT_KEY, null))
           .setPartitionColumns(partitionColumns)
+          .setPopulateMetaColumns(parameters.getOrElse(HoodieTableConfig.HOODIE_POPULATE_META_COLUMNS.key(), HoodieTableConfig.HOODIE_POPULATE_META_COLUMNS.defaultValue()).toBoolean)
           .initTable(sparkContext.hadoopConfiguration, path.get)
         tableConfig = tableMetaClient.getTableConfig
+      } else {
+        // validate table properties
+        val tableMetaClient = HoodieTableMetaClient.builder().setBasePath(path.get).setConf(sparkContext.hadoopConfiguration).build()

Review comment:
       I understand that we need one place to do the validation.
   As of now, any caller that goes directly to WriteClient has to make a separate call to do the validation.
   
   I can think of two other options.
   1. Fix all startCommit() methods in WriteClient to take in the operationType and do the validation there. But there are quite a few callers (200 places) that I would need to fix. With this approach, validation for the row writer path happens within our custom data source, where we instantiate the writeClient and start the commit. Once we have consensus I can make the changes; once fixed, callers don't need to make any additional calls to do the validation. (A rough sketch follows after this list.)
   
   2. I also thought we could add the validation to MetaClient and call it from within [getTableAndInitCtx()](https://github.com/apache/hudi/blob/2099bf41db76e9a6e946aa41c318b7c0e18be04d/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java#L405) in SparkRDDWriteClient. But we need the raw properties from the user: if we call it from within getTableAndInitCtx(), we might fetch properties from the writeConfig, which would already have read the table props. So I'm not sure we can go with this approach. But if we can get this in neatly, no changes are required for those using writeClient directly. For the row writer path, we would need to make one additional call from within [DataSourceInternalWriterHelper](https://github.com/apache/hudi/blob/2099bf41db76e9a6e946aa41c318b7c0e18be04d/hudi-spark-datasource/hudi-spark-common/src/main/java/org/apache/hudi/internal/DataSourceInternalWriterHelper.java#L68) to explicitly validate the props using the metaClient. (Second sketch below.)
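
   To make option 1 concrete, here is a rough Java sketch of an operation-aware startCommit(). The validateTableProperties() helper and the config/hadoopConf fields on the enclosing client are hypothetical, not existing Hudi APIs:

```java
import org.apache.hudi.common.model.WriteOperationType;
import org.apache.hudi.common.table.HoodieTableMetaClient;

// Sketch only: startCommit() takes the operation type and validates the
// incoming write config against the persisted table config before the
// commit begins. validateTableProperties() is a hypothetical helper.
public String startCommit(WriteOperationType operationType) {
  HoodieTableMetaClient metaClient = HoodieTableMetaClient.builder()
      .setBasePath(config.getBasePath()) // config: assumed HoodieWriteConfig field
      .setConf(hadoopConf)               // hadoopConf: assumed Configuration field
      .build();
  // Fail fast if the write config conflicts with hoodie.properties
  // (e.g. the populate-meta-columns flag flipped after table creation).
  validateTableProperties(operationType, config, metaClient.getTableConfig());
  // Delegate to the existing no-arg startCommit() once validation passes.
  return startCommit();
}
```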
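
   And a sketch of the one extra call for the row writer path under option 2. Again, validateTableProperties() is an assumed new method on HoodieTableMetaClient, and the exact hook point inside DataSourceInternalWriterHelper may differ:

```java
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hudi.common.model.WriteOperationType;
import org.apache.hudi.common.table.HoodieTableMetaClient;

// Sketch only: DataSourceInternalWriterHelper validates the raw user props
// explicitly before handing off to the write client, so the check sees the
// props exactly as supplied, before they are merged with table props into
// the write config. validateTableProperties() is a hypothetical API.
void validateBeforeWrite(String basePath, Configuration hadoopConf, Properties rawUserProps) {
  HoodieTableMetaClient metaClient = HoodieTableMetaClient.builder()
      .setBasePath(basePath)
      .setConf(hadoopConf)
      .build();
  metaClient.validateTableProperties(rawUserProps, WriteOperationType.BULK_INSERT);
}
```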



