Github user stczwd commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22575#discussion_r239670323
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -631,6 +631,33 @@ object SQLConf {
         .intConf
         .createWithDefault(200)
     
    +  val SQLSTREAM_WATERMARK_ENABLE = buildConf("spark.sqlstreaming.watermark.enable")
    +    .doc("Whether use watermark in sqlstreaming.")
    +    .booleanConf
    +    .createWithDefault(false)
    +
    +  val SQLSTREAM_OUTPUTMODE = buildConf("spark.sqlstreaming.outputMode")
    +    .doc("The output mode used in sqlstreaming")
    +    .stringConf
    +    .createWithDefault("append")
    +
    +  val SQLSTREAM_TRIGGER = buildConf("spark.sqlstreaming.trigger")
    --- End diff --
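
    For reference, a minimal sketch of how the proposed keys could be set on a
    SparkSession (the `spark.sqlstreaming.*` names exist only in this PR, and the
    trigger value format shown here is an assumption, not confirmed by the diff):

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: these keys are proposed in this PR and are not part of released
// Spark; the trigger value format ("5 seconds") is assumed for illustration.
val spark = SparkSession.builder()
  .appName("sqlstreaming-config-sketch")
  .getOrCreate()

spark.conf.set("spark.sqlstreaming.watermark.enable", "true")
spark.conf.set("spark.sqlstreaming.outputMode", "append")
spark.conf.set("spark.sqlstreaming.trigger", "5 seconds")
```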
    
    > insert into kafka_sql_out select stream t1.value from (select cast(value
    > as string), timestamp as time1 from kafka_sql_in1) as t1 inner join (select
    > cast(value as string), timestamp as time2 from kafka_sql_in2) as t2 on
    > time1 >= time2 and time1 <= time2 + interval 10 seconds where
    > t1.value == t2.value
    
    No, SQLStreaming supports stream-to-stream joins. The watermark config is
    put in the table properties.
    As for the trigger interval, do the different sources in a stream-to-stream
    join scenario need different trigger configs?
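
    For comparison, here is a rough sketch of the quoted stream-stream join in
    the released Structured Streaming DataFrame API (broker address, topic names
    and checkpoint path are placeholders), showing where the watermark, output
    mode and trigger would normally be declared:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr
import org.apache.spark.sql.streaming.Trigger

// Placeholders: broker address, topic names and checkpoint path are made up.
val spark = SparkSession.builder().appName("stream-join-sketch").getOrCreate()

val in1 = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "kafka_sql_in1")
  .load()
  .selectExpr("CAST(value AS STRING) AS value", "timestamp AS time1")
  .withWatermark("time1", "10 seconds")

val in2 = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "kafka_sql_in2")
  .load()
  .selectExpr("CAST(value AS STRING) AS value", "timestamp AS time2")
  .withWatermark("time2", "10 seconds")

// Inner stream-stream join with an event-time range condition, so state for
// old rows can be dropped once the watermark passes.
val joined = in1.as("t1").join(
  in2.as("t2"),
  expr("t1.value = t2.value AND time1 >= time2 AND " +
    "time1 <= time2 + interval 10 seconds"))

joined.select("t1.value")  // the Kafka sink expects a column named 'value'
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "kafka_sql_out")
  .option("checkpointLocation", "/tmp/stream-join-sketch-ckpt")
  .outputMode("append")
  .trigger(Trigger.ProcessingTime("5 seconds"))
  .start()
```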

