Github user koertkuipers commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23173#discussion_r237663865
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala ---
    @@ -1987,6 +1987,18 @@ class CSVSuite extends QueryTest with SharedSQLContext with SQLTestUtils with Te
         assert(errMsg2.contains("'lineSep' can contain only 1 character"))
       }
     
    +  test("SPARK-26208: write and read empty data to csv file with header") {
    +    withTempPath { path =>
    +      val df1 = Seq.empty[(String, String)].toDF("x", "y")
    --- End diff --
    
    That doesn't seem to be what is happening.
    
    If I do a .repartition(4) on an empty dataframe, it still only writes one part file with a header.
    
    If I do a .repartition(4) on a dataframe with 2 elements, then it writes 2 part files with headers.
    
    So it seems empty partitions get pruned, except when all partitions are empty, in which case a single partition is written thanks to SPARK-23271.
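    
    A minimal standalone sketch of the behavior described above, assuming a local SparkSession and hypothetical /tmp output paths (neither the session setup nor the paths are part of the PR or the test being reviewed):
    
        import org.apache.spark.sql.SparkSession
    
        // Assumed local session for illustration only.
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("csv-empty-header-check")
          .getOrCreate()
        import spark.implicits._
    
        // Empty dataframe: even with repartition(4), only one part file
        // (containing just the header) was observed.
        Seq.empty[(String, String)].toDF("x", "y")
          .repartition(4)
          .write.option("header", "true").csv("/tmp/csv-empty")
    
        // Dataframe with 2 rows: the empty partitions get pruned, so
        // 2 part files were observed, each with its own header.
        Seq(("a", "1"), ("b", "2")).toDF("x", "y")
          .repartition(4)
          .write.option("header", "true").csv("/tmp/csv-two-rows")
    
        // Count the part files written for each case.
        import java.io.File
        def partFiles(dir: String): Int =
          new File(dir).listFiles().count(_.getName.startsWith("part-"))
        println(s"empty:    ${partFiles("/tmp/csv-empty")} part file(s)")     // observed: 1
        println(s"two rows: ${partFiles("/tmp/csv-two-rows")} part file(s)")  // observed: 2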

