nsivabalan commented on code in PR #7632:
URL: https://github.com/apache/hudi/pull/7632#discussion_r1072765472


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/DataSourceOptions.scala:
##########
@@ -455,6 +455,15 @@ object DataSourceWriteOptions {
       + "This could introduce the potential issue that the job is restart(`batch id` is lost) while spark checkpoint write fails, "
       + "causing spark will retry and rewrite the data.")
 
+  val STREAMING_DISABLE_COMPACTION: ConfigProperty[String] = ConfigProperty

Review Comment:
   Inline compaction does not make sense for streaming ingestion. So, the only options users have are to leverage async compaction in a separate thread, or to completely disable compaction within the ingestion process and take up async compaction in a separate process altogether.
   
   Given this, I am not sure how we can deduce the intent from existing configs, because the default value for `hoodie.compact.inline` is false, which means async. Can you help me understand? Definitely interested to see if we can avoid the new config. We also tried to follow what deltastreamer does here, introducing a top-level config.
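   For context, a config like the one added in this diff would typically follow the `ConfigProperty` builder pattern used elsewhere in `DataSourceOptions.scala`. A minimal sketch of what such a definition could look like — the key name, default, and documentation text here are illustrative assumptions, not the actual PR code:
   
   ```scala
   // Hypothetical sketch only: a streaming-level switch to disable compaction
   // in the ingestion job, so users can run async compaction in a separate
   // process. Key/default/docs are assumptions for illustration.
   val STREAMING_DISABLE_COMPACTION: ConfigProperty[String] = ConfigProperty
     .key("hoodie.datasource.write.streaming.disable.compaction")
     .defaultValue("false")
     .withDocumentation("When set to true, compaction is not scheduled or "
       + "executed by the streaming ingestion job; users are expected to run "
       + "async compaction in a separate process altogether.")
   ```
   
   The point under discussion is that `hoodie.compact.inline = false` (the default) only says compaction is not inline; it does not distinguish "async compaction in this job" from "no compaction in this job at all", which is why a separate top-level flag is being proposed.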
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
