Github user sujith71955 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22575#discussion_r238329995

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -631,6 +631,33 @@ object SQLConf {
         .intConf
         .createWithDefault(200)

    +  val SQLSTREAM_WATERMARK_ENABLE = buildConf("spark.sqlstreaming.watermark.enable")
    +    .doc("Whether use watermark in sqlstreaming.")
    +    .booleanConf
    +    .createWithDefault(false)
    +
    +  val SQLSTREAM_OUTPUTMODE = buildConf("spark.sqlstreaming.outputMode")
    +    .doc("The output mode used in sqlstreaming")
    +    .stringConf
    +    .createWithDefault("append")
    +
    +  val SQLSTREAM_TRIGGER = buildConf("spark.sqlstreaming.trigger")
    --- End diff --

    We have quite a few configurations here. In Thrift Server scenarios, a user can open multiple sessions and run streaming queries against different query contexts, and each query will need its own trigger interval, watermarking, and windowing settings. Can you elaborate a bit on how we address these scenarios?
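
    For context on the per-session concern, below is a minimal sketch (not part of this PR) of how session-scoped SQL configuration behaves today: each SparkSession keeps its own copy of runtime conf values, which is presumably the mechanism a Thrift Server session would rely on for per-query trigger/watermark settings. The spark.sqlstreaming.* keys are the ones proposed in this diff; the rest uses the standard SparkSession API.

        import org.apache.spark.sql.SparkSession

        object SqlStreamingConfSketch {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .master("local[2]")
              .appName("sqlstreaming-conf-sketch")
              .getOrCreate()

            // Simulate two independent sessions, roughly as a Thrift Server
            // would create one SparkSession per JDBC/ODBC connection.
            val sessionA = spark.newSession()
            val sessionB = spark.newSession()

            // Each session sets its own trigger interval and output mode
            // (keys taken from this diff; values are illustrative only).
            sessionA.conf.set("spark.sqlstreaming.trigger", "5 seconds")
            sessionA.conf.set("spark.sqlstreaming.outputMode", "append")

            sessionB.conf.set("spark.sqlstreaming.trigger", "1 minute")
            sessionB.conf.set("spark.sqlstreaming.outputMode", "update")

            // Runtime conf values are session-local and do not leak across sessions.
            println(sessionA.conf.get("spark.sqlstreaming.trigger"))  // 5 seconds
            println(sessionB.conf.get("spark.sqlstreaming.trigger"))  // 1 minute

            spark.stop()
          }
        }

    Even granting that, it is still unclear how a single session would handle several concurrent streaming queries that each need different trigger/watermark values, which is the scenario I would like to see addressed.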