Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20369#discussion_r163751286
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala ---
    @@ -281,11 +281,9 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
             trigger = trigger)
         } else {
           val ds = DataSource.lookupDataSource(source, df.sparkSession.sessionState.conf)
    -      val sink = (ds.newInstance(), trigger) match {
    -        case (w: ContinuousWriteSupport, _: ContinuousTrigger) => w
    -        case (_, _: ContinuousTrigger) => throw new UnsupportedOperationException(
    -            s"Data source $source does not support continuous writing")
    -        case (w: MicroBatchWriteSupport, _) => w
    +      val disabledSources = df.sparkSession.sqlContext.conf.disabledV2StreamingWriters.split(",")
    --- End diff --
    
    ok, so this is only useful for built-in stream sources, as the v1 source API is not public.
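
    To make the fallback concrete: here is a minimal standalone sketch of the comma-separated disabled-writers check that the `+` line in the diff introduces. The object and method names below are hypothetical and the snippet runs outside Spark; it only mimics the `disabledV2StreamingWriters.split(",")` parsing plus a membership test, not the actual `DataStreamWriter` code path.

    ```scala
    // Illustrative sketch (hypothetical names, not Spark code) of how a
    // comma-separated conf value can gate v2 streaming writers.
    object DisabledV2WritersSketch {
      // Parse the conf value the way the diff does: split on commas,
      // trimming whitespace and dropping empty entries so "" disables nothing.
      def parseDisabled(confValue: String): Set[String] =
        confValue.split(",").map(_.trim).filter(_.nonEmpty).toSet

      // A source would fall back to its v1 sink when its class name is listed.
      def useV1Sink(disabled: Set[String], sourceClass: String): Boolean =
        disabled.contains(sourceClass)

      def main(args: Array[String]): Unit = {
        val disabled = parseDisabled("com.example.FooWriter, com.example.BarWriter")
        println(useV1Sink(disabled, "com.example.FooWriter")) // true
        println(useV1Sink(disabled, "com.example.Other"))     // false
      }
    }
    ```

    Splitting on "," and checking membership against the class name is all the conf-driven disable amounts to, which is why (as noted above) it only matters for built-in sources: external implementors cannot target the non-public v1 API as a fallback anyway.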


---
