[ https://issues.apache.org/jira/browse/SPARK-17583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710172#comment-15710172 ]

koert kuipers commented on SPARK-17583:
---------------------------------------

i just tested our in-house unit tests (which run against spark 2.0.2) against 
spark 2.1.0-RC1, and things break when writing out csvs and reading them back 
in if a csv value contains a newline (which gets quoted). writing out works, 
but reading back in breaks.

now i am not saying it's a good idea to have newlines inside quoted csv values, 
but i just wanted to point out that this used to work with spark 2.0.2. i am 
not entirely sure why it worked, actually. looking at the test, it actually 
writes the value with the newline out over 2 lines, and it reads it back in 
correctly as well. 
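for illustration only (this is not the actual in-house test), a minimal python sketch of the round trip described above, using the standard csv module: a field containing a newline is written quoted across two physical lines, a quote-aware reader recovers the record, but splitting the file on newlines first (as a line-based record reader would) breaks it in two.

```python
import csv
import io

# A record whose middle field contains an embedded newline.
row = ["a", "line1\nline2", "b"]

# Write it out; the field with the newline is quoted and spans
# two physical lines in the serialized output.
buf = io.StringIO()
csv.writer(buf).writerow(row)
text = buf.getvalue()

# A quote-aware CSV reader reassembles the record correctly...
parsed = next(csv.reader(io.StringIO(text)))
assert parsed == row

# ...but naive newline splitting (what a line-based record reader
# does before the CSV parser ever runs) yields two broken pieces.
naive = text.strip().split("\n")
assert len(naive) == 2
```

this is why the quoted value round-trips only when record boundaries are determined by a quote-aware reader rather than by raw line splitting.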

> Remove unused rowSeparator variable and set auto-expanding buffer as default 
> for maxCharsPerColumn option in CSV
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17583
>                 URL: https://issues.apache.org/jira/browse/SPARK-17583
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Minor
>             Fix For: 2.1.0
>
>
> This JIRA includes several changes below:
> 1. Upgrade Univocity library from 2.1.1 to 2.2.1
> This includes some performance improvements and also enables an 
> auto-expanding buffer for the {{maxCharsPerColumn}} option in CSV. Please 
> refer to the [release notes|https://github.com/uniVocity/univocity-parsers/releases].
> 2. Remove the {{rowSeparator}} variable existing in {{CSVOptions}}
> We have this variable in 
> [CSVOptions|https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127]
>  but it can cause confusion because it does not actually handle 
> {{\r\n}}. For example, there is an open issue, SPARK-17227, describing this 
> variable.
> This option is effectively unused because we rely on Hadoop's 
> {{LineRecordReader}}, which handles both {{\n}} and {{\r\n}}.
> 3. Set the default value of {{maxCharsPerColumn}} to auto-expanding
> We currently set 1000000 as the maximum length of each column. It would be 
> more sensible to allow the buffer to auto-expand by default rather than use 
> a fixed length.
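as an aside, the failure mode of a fixed per-column cap (point 3 above) can be illustrated with python's standard csv module, which has an analogous fixed limit via {{field_size_limit}}. this is only an analogy, not the Univocity or Spark API:

```python
import csv
import io

# Python's csv module enforces a fixed cap on field size, analogous
# to a fixed maxCharsPerColumn. Lower it to a tiny value to show the
# failure mode; the call returns the previous limit so it can be restored.
old_limit = csv.field_size_limit(10)

big_field = "x" * 50
raised = False
try:
    next(csv.reader(io.StringIO(f'"{big_field}"\r\n')))
except csv.Error:
    raised = True  # field larger than the fixed limit

csv.field_size_limit(old_limit)  # restore the previous limit
assert raised
```

with a fixed cap, any column longer than the configured limit fails the whole read, which is exactly what an auto-expanding default avoids.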



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
