Github user HyukjinKwon commented on the pull request:

    https://github.com/apache/spark/pull/12818#issuecomment-216740931
  
    @jbax Cool! Thank you for the detailed explanation.
    
    So, without `setLineSeparator()` this uses the OS default newline, which is trimmed [here](https://github.com/apache/spark/blob/73b56a3c6c5c590219b42884c8bbe88b0a236987/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala#L89) for each row in Spark.
    
    ```scala
    scala> "foo\n".stripLineEnd
    res0: String = foo
    
    scala> "foo\r\n".stripLineEnd
    res1: String = foo
    ```
    
    
    (FYI, currently Spark writes line by line, but [it opens and closes `CSVWriter` for each row](https://github.com/apache/spark/blob/73b56a3c6c5c590219b42884c8bbe88b0a236987/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala#L79-L91), which definitely should be refactored. It then uses a separate Hadoop class, `LineRecordWriter`, to actually write each line after Univocity converts it to a string. So I noticed `setLineSeparator()` can be ignored.)
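
    (To illustrate the refactoring I mean, here is a minimal sketch — the names are hypothetical, not Spark's actual classes — of reusing a single writer for all rows instead of opening and closing one per row. The real code would delegate quoting/escaping to Univocity.)

    ```scala
    import java.io.StringWriter

    object WriterReuseSketch {
      // Open the underlying writer once, append every row, close once —
      // rather than constructing a new writer per row.
      def writeAll(rows: Seq[Seq[String]], sep: String = "\n"): String = {
        val out = new StringWriter()
        rows.foreach { row =>
          out.write(row.mkString(","))  // placeholder for Univocity's row formatting
          out.write(sep)                // the separator that stripLineEnd later trims
        }
        out.toString
      }
    }
    ```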

