GitHub user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20958#discussion_r178646725
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala ---
    @@ -238,6 +238,10 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
             "write files of Hive data source directly.")
         }
     
    +    val isSocketExists = df.queryExecution.analyzed.collect {
    --- End diff --
    
    I see what you are trying to do. But honestly, we should NOT add any more special cases for specific sources. We already have special cases for memory and foreach because it is hard to get rid of those; we should not add more.
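
    For context, here is a minimal sketch of the kind of per-source check under
    discussion. This is not the PR's actual code: the object name
    `SocketSourceCheckSketch` is made up, and the assumption that the socket
    source surfaces in the analyzed plan as a `StreamingRelation` (V1, reported
    as "textSocket") or `StreamingRelationV2` (V2, reported under the format
    string "socket") node with a `sourceName` field follows Spark 2.3/2.4-era
    internals and may differ on other versions.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.streaming.{StreamingRelation, StreamingRelationV2}

    object SocketSourceCheckSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[2]")
          .appName("socket-source-check-sketch")
          .getOrCreate()

        // Hypothetical socket stream. On some Spark versions even building this
        // DataFrame reaches for the connection, so run `nc -lk 9999` first.
        val df = spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", "9999")
          .load()

        // Walk the analyzed logical plan and record the names of any streaming
        // relations that look like the socket source.
        val socketSources = df.queryExecution.analyzed.collect {
          case r: StreamingRelation if r.sourceName.toLowerCase.contains("socket") => r.sourceName
          case r: StreamingRelationV2 if r.sourceName.toLowerCase.contains("socket") => r.sourceName
        }

        println(s"socket source present in plan: ${socketSources.nonEmpty} ($socketSources)")
        spark.stop()
      }
    }

    Inside `DataStreamWriter` the same kind of `collect` would run against the
    analyzed plan of the dataset being written, which is exactly why each such
    check hard-wires knowledge of one specific source into the writer.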


---
