pengzhiwei2018 commented on a change in pull request #3387:
URL: https://github.com/apache/hudi/pull/3387#discussion_r682621582



##########
File path: 
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DataSourceOptions.scala
##########
@@ -399,6 +400,11 @@ object DataSourceWriteOptions {
     .defaultValue(1000)
     .withDocumentation("The number of partitions one batch when synchronous 
partitions to hive.")
 
+  val HIVE_SYNC_MODE: ConfigProperty[String] = ConfigProperty
+    .key("hoodie.datasource.hive_sync.mode")
+    .noDefaultValue()

Review comment:
       Currently `jdbc` is the default value for Spark SQL. But for the Spark 
datasource path, if we set `jdbc` as the default here, the `useJdbc` config will 
no longer take effect; see the logic in `HoodieHiveClient`. Hard-coding the 
default would therefore break existing Spark datasource jobs that rely on 
`useJdbc`.
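
The interaction described above can be sketched roughly as follows. This is an 
illustrative Scala sketch, not the actual `HoodieHiveClient` implementation: 
the object and method names, and the `"hms"` fallback, are assumptions made 
for the example. The point is only that when the sync mode has no default, the 
legacy `useJdbc` flag still decides the path, whereas a hard-coded `"jdbc"` 
default would shadow it:

```scala
// Hypothetical sketch of the fallback logic discussed in this review.
// Names are illustrative; they do not mirror HoodieHiveClient's real code.
object SyncModeResolution {
  def resolveSyncMode(syncMode: Option[String], useJdbc: Boolean): String =
    syncMode.map(_.toLowerCase) match {
      case Some(mode)      => mode   // explicitly configured mode wins
      case None if useJdbc => "jdbc" // legacy useJdbc flag is still honoured
      case None            => "hms"  // assumed non-JDBC fallback for the sketch
    }
}
```

With a `noDefaultValue()` key, the `None` branches remain reachable, so 
`useJdbc` keeps working for old jobs; with `.defaultValue("jdbc")`, `syncMode` 
would always be `Some("jdbc")` and the flag would be silently ignored.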



