[ https://issues.apache.org/jira/browse/SPARK-21289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16074186#comment-16074186 ]

Hyukjin Kwon commented on SPARK-21289:
--------------------------------------

Just to be sure, 
{code}
curl -O https://raw.githubusercontent.com/HyukjinKwon/spark/264a1dc603164bd264e0c084608f31ffb8ad5f69/sql/core/src/test/resources/cars_utf-16.csv
file -I cars_utf-16.csv
{code}

{code}
cars_utf-16.csv: text/plain; charset=utf-16be
{code}
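For reference, the same charset check can be sketched in plain Python (this is illustrative only, not Spark code; the sample bytes stand in for a local copy of the file):

```python
# Minimal sketch: confirming that UTF-16BE bytes decode as expected,
# analogous to what `file -I` reports above for cars_utf-16.csv.
data = "year,make,model\n2012,Tesla,S\n".encode("utf-16-be")

# UTF-16BE uses two bytes per code unit, high byte first, so ASCII
# characters are preceded by a 0x00 byte.
assert data[:2] == b"\x00y"

decoded = data.decode("utf-16-be")
print(decoded.splitlines()[0])
```

Decoding with the wrong variant (e.g. "utf-16-le") would produce garbage here, which is why the reported charset matters for the reader.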

> Text and CSV formats do not support custom end-of-line delimiters
> -----------------------------------------------------------------
>
>                 Key: SPARK-21289
>                 URL: https://issues.apache.org/jira/browse/SPARK-21289
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.1
>            Reporter: Yevgen Galchenko
>            Priority: Minor
>
> Spark's CSV and text readers always use the default CR, LF, or CRLF line 
> terminators, with no option to configure a custom delimiter.
> The "textinputformat.record.delimiter" option is not used by 
> HadoopFileLinesReader; it can only be set for a Hadoop RDD when textFile() 
> is used to read the file.
> A possible solution would be to change HadoopFileLinesReader to create a 
> LineRecordReader with the delimiter specified in the configuration; 
> LineRecordReader already supports passing recordDelimiter in its constructor.
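To illustrate what the proposed change would do, here is a minimal pure-Python sketch of splitting a byte stream into records on a configurable delimiter (the helper name split_records and the '|' delimiter are hypothetical; the actual fix would pass the configured delimiter bytes through to LineRecordReader's constructor):

```python
# Illustrative sketch (plain Python, not Spark/Hadoop code): record
# splitting on a custom delimiter instead of the hard-coded CR/LF/CRLF.
def split_records(data: bytes, delimiter: bytes):
    records = data.split(delimiter)
    # A trailing delimiter yields an empty final record; drop it to
    # match typical line-reader behavior.
    if records and records[-1] == b"":
        records.pop()
    return records

# Hypothetical input using '|' as the record delimiter.
print(split_records(b"row1|row2|row3|", b"|"))
```

The same idea, surfaced as a reader option, would let users of the CSV and text sources pick their own end-of-line marker.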



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
