[ https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364875#comment-16364875 ]

Maxim Gekk commented on SPARK-23410:
------------------------------------

I attached the file I tested with on Spark 2.2.1:

{code:scala}
import org.apache.spark.sql.types._

val schema = new StructType()
  .add("firstName", StringType)
  .add("lastName", StringType)

spark.read.schema(schema).json("utf16WithBOM.json").show
{code}

{code}
+---------+--------+
|firstName|lastName|
+---------+--------+
|    Chris|   Baird|
|     null|    null|
|     Doug|    Rood|
|     null|    null|
|     null|    null|
+---------+--------+
{code}
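On 2.3.0 the same read no longer works because the parser assumes UTF-8 (see the description below). A possible workaround sketch (my own, not an official API): decode the bytes explicitly and hand already-decoded lines to the json(Dataset[String]) overload, which bypasses the file reader's UTF-8 assumption. The file name and the UTF-16 charset match the attachment:

{code:scala}
// Workaround sketch, assuming the attached JSON-lines file in UTF-16 with BOM.
import java.nio.charset.StandardCharsets

import spark.implicits._

val lines = spark.sparkContext
  .binaryFiles("utf16WithBOM.json")
  // PortableDataStream.toArray reads the whole file into memory;
  // the UTF_16 charset honors the BOM when decoding.
  .flatMap { case (_, stream) =>
    new String(stream.toArray, StandardCharsets.UTF_16).split("\r?\n")
  }
  .toDS()

// json(Dataset[String]) parses already-decoded strings, so no charset
// assumption is made on the raw file bytes.
spark.read.schema(schema).json(lines).show
{code}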

> Unable to read jsons in charset different from UTF-8
> ----------------------------------------------------
>
>                 Key: SPARK-23410
>                 URL: https://issues.apache.org/jira/browse/SPARK-23410
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.3.0
>            Reporter: Maxim Gekk
>            Priority: Major
>         Attachments: utf16WithBOM.json
>
>
> Currently the JSON parser is forced to read JSON files in UTF-8. Such
> behavior breaks backward compatibility with Spark 2.2.1 and previous
> versions, which can read JSON files in UTF-16, UTF-32 and other encodings
> thanks to the auto-detection mechanism of the Jackson library. We need to
> give users back the ability to read JSON files in a specified charset
> and/or to detect the charset automatically, as before.
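
For reference, a minimal standalone sketch (outside Spark, my own illustration) of the Jackson auto-detection the description refers to. It uses the attached file and Jackson's createParser(Array[Byte]) entry point, which sniffs the BOM / leading bytes itself:

{code:scala}
// Standalone illustration of Jackson's charset auto-detection, which
// Spark 2.2.1 implicitly relied on; file name matches the attachment.
import java.nio.file.{Files, Paths}

import com.fasterxml.jackson.core.{JsonFactory, JsonToken}

val bytes = Files.readAllBytes(Paths.get("utf16WithBOM.json"))
// createParser(Array[Byte]) lets Jackson inspect the BOM / first bytes
// and choose among UTF-8, UTF-16 and UTF-32 on its own.
val parser = new JsonFactory().createParser(bytes)
assert(parser.nextToken() == JsonToken.START_OBJECT)
{code}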


